00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1999 00:00:00.001 originally caused by: 00:00:00.027 Started by upstream project "nightly-trigger" build number 3260 00:00:00.027 originally caused by: 00:00:00.027 Started by timer 00:00:00.027 Started by timer 00:00:00.122 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.123 The recommended git tool is: git 00:00:00.124 using credential 00000000-0000-0000-0000-000000000002 00:00:00.125 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.182 Fetching changes from the remote Git repository 00:00:00.185 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.245 Using shallow fetch with depth 1 00:00:00.245 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.245 > git --version # timeout=10 00:00:00.294 > git --version # 'git version 2.39.2' 00:00:00.294 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.320 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.320 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.228 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.239 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.250 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:08.250 > git config core.sparsecheckout # timeout=10 00:00:08.261 > git read-tree -mu HEAD # timeout=10 00:00:08.277 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:08.301 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:08.301 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:08.410 [Pipeline] Start of Pipeline 00:00:08.426 [Pipeline] library 00:00:08.428 Loading library shm_lib@master 00:00:08.428 Library shm_lib@master is cached. Copying from home. 00:00:08.443 [Pipeline] node 00:00:08.464 Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu20-vg-autotest 00:00:08.465 [Pipeline] { 00:00:08.473 [Pipeline] catchError 00:00:08.474 [Pipeline] { 00:00:08.483 [Pipeline] wrap 00:00:08.490 [Pipeline] { 00:00:08.496 [Pipeline] stage 00:00:08.497 [Pipeline] { (Prologue) 00:00:08.510 [Pipeline] echo 00:00:08.511 Node: VM-host-SM16 00:00:08.515 [Pipeline] cleanWs 00:00:08.524 [WS-CLEANUP] Deleting project workspace... 00:00:08.524 [WS-CLEANUP] Deferred wipeout is used... 
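The git sequence at the top of this log pins the job definition to one exact commit: Jenkins shallow-fetches only the tip of master from the jbp repository and then force-checks-out the fetched commit, leaving a detached HEAD. The same pinning can be reproduced outside Jenkins with plain git; a minimal sketch, reusing the repository URL and commit hash from the log above (the local directory name is illustrative):

    # Mirror the git calls Jenkins issues above: shallow fetch of master's
    # tip, then a forced, detached checkout of the fetched commit.
    git init jbp && cd jbp
    git remote add origin https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 origin refs/heads/master
    git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f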
00:00:08.532 [WS-CLEANUP] done 00:00:08.738 [Pipeline] setCustomBuildProperty 00:00:08.834 [Pipeline] httpRequest 00:00:08.858 [Pipeline] echo 00:00:08.860 Sorcerer 10.211.164.101 is alive 00:00:08.866 [Pipeline] httpRequest 00:00:08.872 HttpMethod: GET 00:00:08.873 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:08.873 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:08.894 Response Code: HTTP/1.1 200 OK 00:00:08.895 Success: Status code 200 is in the accepted range: 200,404 00:00:08.895 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:30.829 [Pipeline] sh 00:00:31.110 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:31.127 [Pipeline] httpRequest 00:00:31.146 [Pipeline] echo 00:00:31.148 Sorcerer 10.211.164.101 is alive 00:00:31.157 [Pipeline] httpRequest 00:00:31.162 HttpMethod: GET 00:00:31.163 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:31.163 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:31.201 Response Code: HTTP/1.1 200 OK 00:00:31.201 Success: Status code 200 is in the accepted range: 200,404 00:00:31.202 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:08.736 [Pipeline] sh 00:01:09.017 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:11.562 [Pipeline] sh 00:01:11.873 + git -C spdk log --oneline -n5 00:01:11.873 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:11.873 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:11.873 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:11.873 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:11.873 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:01:11.895 [Pipeline] writeFile 00:01:11.913 [Pipeline] sh 00:01:12.195 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:12.207 [Pipeline] sh 00:01:12.488 + cat autorun-spdk.conf 00:01:12.488 SPDK_TEST_UNITTEST=1 00:01:12.488 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.488 SPDK_TEST_NVME=1 00:01:12.488 SPDK_TEST_BLOCKDEV=1 00:01:12.488 SPDK_RUN_ASAN=1 00:01:12.488 SPDK_RUN_UBSAN=1 00:01:12.488 SPDK_TEST_RAID5=1 00:01:12.488 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:12.495 RUN_NIGHTLY=1 00:01:12.497 [Pipeline] } 00:01:12.514 [Pipeline] // stage 00:01:12.533 [Pipeline] stage 00:01:12.535 [Pipeline] { (Run VM) 00:01:12.551 [Pipeline] sh 00:01:12.832 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:12.832 + echo 'Start stage prepare_nvme.sh' 00:01:12.832 Start stage prepare_nvme.sh 00:01:12.832 + [[ -n 0 ]] 00:01:12.832 + disk_prefix=ex0 00:01:12.832 + [[ -n /var/jenkins/workspace/ubuntu20-vg-autotest ]] 00:01:12.832 + [[ -e /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf ]] 00:01:12.832 + source /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf 00:01:12.832 ++ SPDK_TEST_UNITTEST=1 00:01:12.832 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.832 ++ SPDK_TEST_NVME=1 00:01:12.832 ++ SPDK_TEST_BLOCKDEV=1 00:01:12.832 ++ SPDK_RUN_ASAN=1 00:01:12.832 ++ SPDK_RUN_UBSAN=1 00:01:12.832 ++ SPDK_TEST_RAID5=1 00:01:12.832 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:12.832 ++ RUN_NIGHTLY=1 00:01:12.832 + cd /var/jenkins/workspace/ubuntu20-vg-autotest 00:01:12.832 + 
nvme_files=() 00:01:12.832 + declare -A nvme_files 00:01:12.832 + backend_dir=/var/lib/libvirt/images/backends 00:01:12.832 + nvme_files['nvme.img']=5G 00:01:12.832 + nvme_files['nvme-cmb.img']=5G 00:01:12.832 + nvme_files['nvme-multi0.img']=4G 00:01:12.832 + nvme_files['nvme-multi1.img']=4G 00:01:12.832 + nvme_files['nvme-multi2.img']=4G 00:01:12.832 + nvme_files['nvme-openstack.img']=8G 00:01:12.832 + nvme_files['nvme-zns.img']=5G 00:01:12.832 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:12.832 + (( SPDK_TEST_FTL == 1 )) 00:01:12.832 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:12.832 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:12.832 + for nvme in "${!nvme_files[@]}" 00:01:12.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:12.832 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.832 + for nvme in "${!nvme_files[@]}" 00:01:12.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:12.832 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.832 + for nvme in "${!nvme_files[@]}" 00:01:12.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:12.832 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:12.832 + for nvme in "${!nvme_files[@]}" 00:01:12.832 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:12.832 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:12.832 + for nvme in "${!nvme_files[@]}" 00:01:12.833 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:12.833 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.833 + for nvme in "${!nvme_files[@]}" 00:01:12.833 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:12.833 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:12.833 + for nvme in "${!nvme_files[@]}" 00:01:12.833 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:13.092 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:13.092 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:13.092 + echo 'End stage prepare_nvme.sh' 00:01:13.092 End stage prepare_nvme.sh 00:01:13.104 [Pipeline] sh 00:01:13.383 + DISTRO=ubuntu2004 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:13.383 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -H -a -v -f ubuntu2004 00:01:13.383 00:01:13.383 DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant 00:01:13.383 SPDK_DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk 00:01:13.383 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu20-vg-autotest 00:01:13.383 HELP=0 00:01:13.383 DRY_RUN=0 00:01:13.383 
NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img, 00:01:13.383 NVME_DISKS_TYPE=nvme, 00:01:13.383 NVME_AUTO_CREATE=0 00:01:13.383 NVME_DISKS_NAMESPACES=, 00:01:13.383 NVME_CMB=, 00:01:13.383 NVME_PMR=, 00:01:13.383 NVME_ZNS=, 00:01:13.383 NVME_MS=, 00:01:13.383 NVME_FDP=, 00:01:13.383 SPDK_VAGRANT_DISTRO=ubuntu2004 00:01:13.383 SPDK_VAGRANT_VMCPU=10 00:01:13.383 SPDK_VAGRANT_VMRAM=12288 00:01:13.383 SPDK_VAGRANT_PROVIDER=libvirt 00:01:13.384 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:13.384 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:13.384 SPDK_OPENSTACK_NETWORK=0 00:01:13.384 VAGRANT_PACKAGE_BOX=0 00:01:13.384 VAGRANTFILE=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:13.384 FORCE_DISTRO=true 00:01:13.384 VAGRANT_BOX_VERSION= 00:01:13.384 EXTRA_VAGRANTFILES= 00:01:13.384 NIC_MODEL=e1000 00:01:13.384 00:01:13.384 mkdir: created directory '/var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt' 00:01:13.384 /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt /var/jenkins/workspace/ubuntu20-vg-autotest 00:01:16.669 Bringing machine 'default' up with 'libvirt' provider... 00:01:16.669 ==> default: Creating image (snapshot of base box volume). 00:01:16.927 ==> default: Creating domain with the following settings... 00:01:16.927 ==> default: -- Name: ubuntu2004-20.04-1712646987-2220_default_1720714613_4427ed9aa316973dfd51 00:01:16.927 ==> default: -- Domain type: kvm 00:01:16.927 ==> default: -- Cpus: 10 00:01:16.927 ==> default: -- Feature: acpi 00:01:16.927 ==> default: -- Feature: apic 00:01:16.927 ==> default: -- Feature: pae 00:01:16.927 ==> default: -- Memory: 12288M 00:01:16.927 ==> default: -- Memory Backing: hugepages: 00:01:16.927 ==> default: -- Management MAC: 00:01:16.927 ==> default: -- Loader: 00:01:16.927 ==> default: -- Nvram: 00:01:16.927 ==> default: -- Base box: spdk/ubuntu2004 00:01:16.927 ==> default: -- Storage pool: default 00:01:16.927 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2004-20.04-1712646987-2220_default_1720714613_4427ed9aa316973dfd51.img (20G) 00:01:16.927 ==> default: -- Volume Cache: default 00:01:16.927 ==> default: -- Kernel: 00:01:16.927 ==> default: -- Initrd: 00:01:16.927 ==> default: -- Graphics Type: vnc 00:01:16.927 ==> default: -- Graphics Port: -1 00:01:16.927 ==> default: -- Graphics IP: 127.0.0.1 00:01:16.927 ==> default: -- Graphics Password: Not defined 00:01:16.927 ==> default: -- Video Type: cirrus 00:01:16.927 ==> default: -- Video VRAM: 9216 00:01:16.927 ==> default: -- Sound Type: 00:01:16.927 ==> default: -- Keymap: en-us 00:01:16.927 ==> default: -- TPM Path: 00:01:16.927 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:16.927 ==> default: -- Command line args: 00:01:16.927 ==> default: -> value=-device, 00:01:16.927 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:16.927 ==> default: -> value=-drive, 00:01:16.927 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:16.927 ==> default: -> value=-device, 00:01:16.927 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:17.185 ==> default: Creating shared folders metadata... 00:01:17.185 ==> default: Starting domain. 00:01:18.559 ==> default: Waiting for domain to get an IP address... 00:01:28.555 ==> default: Waiting for SSH to become available... 
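The domain definition above shows how the raw backing file from prepare_nvme.sh reaches the guest: a -drive with no frontend (if=none) is bound to an emulated NVMe controller plus one namespace via -device arguments. The same wiring can be written as a bare QEMU invocation; a minimal sketch assuming the ex0-nvme.img backend created earlier (machine type and memory size are illustrative, not the job's exact command line):

    # Create the raw 5G backing file, then expose it to a guest as NVMe
    # controller 'nvme-0' (serial 12340) with a single 4K-block namespace,
    # matching the -drive/-device arguments in the domain definition above.
    qemu-img create -f raw /var/lib/libvirt/images/backends/ex0-nvme.img 5G
    qemu-system-x86_64 -machine q35,accel=kvm -m 4096 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme,id=nvme-0,serial=12340 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096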
00:01:29.929 ==> default: Configuring and enabling network interfaces... 00:01:31.828 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:38.390 ==> default: Mounting SSHFS shared folder... 00:01:38.391 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output => /home/vagrant/spdk_repo/output 00:01:38.391 ==> default: Checking Mount.. 00:01:40.922 ==> default: Checking Mount.. 00:01:40.922 ==> default: Folder Successfully Mounted! 00:01:40.922 ==> default: Running provisioner: file... 00:01:41.181 default: ~/.gitconfig => .gitconfig 00:01:41.440 00:01:41.440 SUCCESS! 00:01:41.440 00:01:41.440 cd to /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt and type "vagrant ssh" to use. 00:01:41.440 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:41.440 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt" to destroy all trace of vm. 00:01:41.440 00:01:41.452 [Pipeline] } 00:01:41.474 [Pipeline] // stage 00:01:41.484 [Pipeline] dir 00:01:41.485 Running in /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt 00:01:41.487 [Pipeline] { 00:01:41.502 [Pipeline] catchError 00:01:41.504 [Pipeline] { 00:01:41.520 [Pipeline] sh 00:01:41.803 + vagrant ssh-config --host vagrant 00:01:41.803 + sed -ne /^Host/,$p 00:01:41.803 + tee ssh_conf 00:01:45.989 Host vagrant 00:01:45.989 HostName 192.168.121.164 00:01:45.989 User vagrant 00:01:45.989 Port 22 00:01:45.989 UserKnownHostsFile /dev/null 00:01:45.989 StrictHostKeyChecking no 00:01:45.989 PasswordAuthentication no 00:01:45.989 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2004/20.04-1712646987-2220/libvirt/ubuntu2004 00:01:45.989 IdentitiesOnly yes 00:01:45.989 LogLevel FATAL 00:01:45.989 ForwardAgent yes 00:01:45.989 ForwardX11 yes 00:01:45.989 00:01:46.004 [Pipeline] withEnv 00:01:46.006 [Pipeline] { 00:01:46.022 [Pipeline] sh 00:01:46.301 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:46.301 source /etc/os-release 00:01:46.301 [[ -e /image.version ]] && img=$(< /image.version) 00:01:46.301 # Minimal, systemd-like check. 00:01:46.301 if [[ -e /.dockerenv ]]; then 00:01:46.301 # Clear garbage from the node's name: 00:01:46.301 # agt-er_autotest_547-896 -> autotest_547-896 00:01:46.301 # $HOSTNAME is the actual container id 00:01:46.301 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:46.301 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:46.301 # We can assume this is a mount from a host where container is running, 00:01:46.301 # so fetch its hostname to easily identify the target swarm worker. 
00:01:46.301 container="$(< /etc/hostname) ($agent)" 00:01:46.301 else 00:01:46.301 # Fallback 00:01:46.301 container=$agent 00:01:46.301 fi 00:01:46.301 fi 00:01:46.301 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:46.301 00:01:46.877 [Pipeline] } 00:01:46.897 [Pipeline] // withEnv 00:01:46.905 [Pipeline] setCustomBuildProperty 00:01:46.947 [Pipeline] stage 00:01:46.949 [Pipeline] { (Tests) 00:01:46.967 [Pipeline] sh 00:01:47.245 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:47.819 [Pipeline] sh 00:01:48.094 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:48.672 [Pipeline] timeout 00:01:48.672 Timeout set to expire in 1 hr 30 min 00:01:48.673 [Pipeline] { 00:01:48.685 [Pipeline] sh 00:01:48.958 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:49.893 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:49.905 [Pipeline] sh 00:01:50.186 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:50.752 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:50.767 [Pipeline] sh 00:01:51.084 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:51.667 [Pipeline] sh 00:01:51.946 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu20-vg-autotest ./autoruner.sh spdk_repo 00:01:52.513 ++ readlink -f spdk_repo 00:01:52.513 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:52.513 + [[ -n /home/vagrant/spdk_repo ]] 00:01:52.513 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:52.513 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:52.513 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:52.513 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:52.513 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:52.513 + [[ ubuntu20-vg-autotest == pkgdep-* ]] 00:01:52.513 + cd /home/vagrant/spdk_repo 00:01:52.513 + source /etc/os-release 00:01:52.513 ++ NAME=Ubuntu 00:01:52.513 ++ VERSION='20.04.6 LTS (Focal Fossa)' 00:01:52.513 ++ ID=ubuntu 00:01:52.513 ++ ID_LIKE=debian 00:01:52.513 ++ PRETTY_NAME='Ubuntu 20.04.6 LTS' 00:01:52.513 ++ VERSION_ID=20.04 00:01:52.513 ++ HOME_URL=https://www.ubuntu.com/ 00:01:52.513 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:52.513 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:52.513 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:52.513 ++ VERSION_CODENAME=focal 00:01:52.513 ++ UBUNTU_CODENAME=focal 00:01:52.513 + uname -a 00:01:52.513 Linux ubuntu2004-cloud-1712646987-2220 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:52.513 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:52.513 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:52.771 Hugepages 00:01:52.771 node hugesize free / total 00:01:52.771 node0 1048576kB 0 / 0 00:01:52.771 node0 2048kB 0 / 0 00:01:52.771 00:01:52.771 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:52.771 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:52.771 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:52.771 + rm -f /tmp/spdk-ld-path 00:01:52.771 + source autorun-spdk.conf 00:01:52.771 ++ SPDK_TEST_UNITTEST=1 00:01:52.771 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.771 ++ SPDK_TEST_NVME=1 00:01:52.771 ++ SPDK_TEST_BLOCKDEV=1 00:01:52.771 ++ SPDK_RUN_ASAN=1 00:01:52.771 ++ SPDK_RUN_UBSAN=1 00:01:52.771 ++ SPDK_TEST_RAID5=1 00:01:52.771 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:52.771 ++ RUN_NIGHTLY=1 00:01:52.771 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:52.771 + [[ -n '' ]] 00:01:52.771 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:52.771 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:52.771 + for M in /var/spdk/build-*-manifest.txt 00:01:52.771 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:52.771 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:52.771 + for M in /var/spdk/build-*-manifest.txt 00:01:52.771 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:52.771 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:52.771 ++ uname 00:01:52.771 + [[ Linux == \L\i\n\u\x ]] 00:01:52.771 + sudo dmesg -T 00:01:52.771 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:52.771 + sudo dmesg --clear 00:01:52.771 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:52.771 + dmesg_pid=2341 00:01:52.771 + sudo dmesg -Tw 00:01:52.771 + [[ Ubuntu == FreeBSD ]] 00:01:52.771 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:52.771 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:52.771 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:52.771 + [[ -x /usr/src/fio-static/fio ]] 00:01:52.771 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:52.771 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:52.771 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:52.771 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:52.771 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:52.771 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:52.771 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:52.771 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:52.771 Test configuration: 00:01:52.771 SPDK_TEST_UNITTEST=1 00:01:52.771 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.771 SPDK_TEST_NVME=1 00:01:52.771 SPDK_TEST_BLOCKDEV=1 00:01:52.771 SPDK_RUN_ASAN=1 00:01:52.771 SPDK_RUN_UBSAN=1 00:01:52.771 SPDK_TEST_RAID5=1 00:01:52.771 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:53.030 RUN_NIGHTLY=1 16:17:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:53.030 16:17:28 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:53.030 16:17:28 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.030 16:17:28 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.030 16:17:28 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:53.030 16:17:28 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:53.030 16:17:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:53.030 16:17:28 -- paths/export.sh@5 -- $ export PATH 00:01:53.030 16:17:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:53.030 16:17:28 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:53.030 16:17:28 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:53.030 16:17:28 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720714648.XXXXXX 00:01:53.030 16:17:28 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720714648.T79vse 00:01:53.030 16:17:28 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:53.030 16:17:28 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:53.030 16:17:28 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:53.030 16:17:28 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:53.030 16:17:28 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o 
/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:53.030 16:17:28 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:53.030 16:17:28 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:53.030 16:17:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.030 16:17:28 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:01:53.030 16:17:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:53.030 16:17:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:53.030 16:17:28 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:53.030 16:17:28 -- spdk/autobuild.sh@16 -- $ date -u 00:01:53.030 Thu Jul 11 16:17:28 UTC 2024 00:01:53.030 16:17:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:53.030 LTS-59-g4b94202c6 00:01:53.030 16:17:28 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:53.030 16:17:28 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:53.030 16:17:28 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:53.030 16:17:28 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:53.030 16:17:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.030 ************************************ 00:01:53.030 START TEST asan 00:01:53.030 ************************************ 00:01:53.030 using asan 00:01:53.030 16:17:28 -- common/autotest_common.sh@1104 -- $ echo 'using asan' 00:01:53.030 00:01:53.030 real 0m0.000s 00:01:53.030 user 0m0.000s 00:01:53.030 sys 0m0.000s 00:01:53.030 16:17:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:53.030 16:17:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.030 ************************************ 00:01:53.030 END TEST asan 00:01:53.030 ************************************ 00:01:53.030 16:17:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:53.030 16:17:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:53.030 16:17:28 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:53.030 16:17:28 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:53.030 16:17:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.030 ************************************ 00:01:53.030 START TEST ubsan 00:01:53.030 ************************************ 00:01:53.030 using ubsan 00:01:53.030 16:17:28 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:53.030 00:01:53.030 real 0m0.000s 00:01:53.030 user 0m0.000s 00:01:53.030 sys 0m0.000s 00:01:53.030 16:17:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:53.030 16:17:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.030 ************************************ 00:01:53.030 END TEST ubsan 00:01:53.030 ************************************ 00:01:53.030 16:17:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:53.030 16:17:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:53.030 16:17:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:53.030 16:17:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:53.030 16:17:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:53.030 16:17:28 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:53.030 16:17:28 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:53.030 16:17:28 -- common/autobuild_common.sh@411 -- $ run_test unittest_build 
_unittest_build 00:01:53.030 16:17:28 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:53.030 16:17:28 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:53.030 16:17:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.030 ************************************ 00:01:53.030 START TEST unittest_build 00:01:53.030 ************************************ 00:01:53.030 16:17:28 -- common/autotest_common.sh@1104 -- $ _unittest_build 00:01:53.030 16:17:28 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:01:53.289 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:53.289 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:53.547 Using 'verbs' RDMA provider 00:02:08.998 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:02:21.198 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:21.198 Creating mk/config.mk...done. 00:02:21.198 Creating mk/cc.flags.mk...done. 00:02:21.198 Type 'make' to build. 00:02:21.198 16:17:57 -- common/autobuild_common.sh@403 -- $ make -j10 00:02:22.597 make[1]: Nothing to be done for 'all'. 00:02:22.597 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:25.180 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]

[nasm emits these same two reg_sizes.asm:208 and reg_sizes.asm:358 warnings once for every ISA-L object assembled between 00:02:22.597 and 00:02:32.192; the duplicate warning lines are omitted here]
`.note.gnu.property' [-w+other] 00:02:32.192 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.192 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.192 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.192 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.450 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.450 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.450 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.450 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.450 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.450 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.707 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.707 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.707 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.707 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:32.965 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.221 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.477 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.477 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.734 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.734 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:33.990 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.246 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.246 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:34.246 ./include//reg_sizes.asm:358: 
00:02:35.890 The Meson build system 00:02:35.890 Version: 1.4.0 00:02:35.890 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:35.891 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:35.891 Build type: native build 00:02:35.891 Program cat found: YES (/usr/bin/cat) 00:02:35.891 Project name: DPDK 00:02:35.891 Project version: 23.11.0 00:02:35.891 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:02:35.891 C linker for the host machine: cc ld.bfd 2.34 00:02:35.891 Host machine cpu family: x86_64 00:02:35.891 Host machine cpu: x86_64 00:02:35.891 Message: ## Building in Developer Mode ## 00:02:35.891 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:35.891 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:35.891 Program options-ibverbs-static.sh
found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:35.891 Program python3 found: YES (/usr/bin/python3) 00:02:35.891 Program cat found: YES (/usr/bin/cat) 00:02:35.891 Compiler for C supports arguments -march=native: YES 00:02:35.891 Checking for size of "void *" : 8 00:02:35.891 Checking for size of "void *" : 8 (cached) 00:02:35.891 Library m found: YES 00:02:35.891 Library numa found: YES 00:02:35.891 Has header "numaif.h" : YES 00:02:35.891 Library fdt found: NO 00:02:35.891 Library execinfo found: NO 00:02:35.891 Has header "execinfo.h" : YES 00:02:35.891 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:02:35.891 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:35.891 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:35.891 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:35.891 Run-time dependency openssl found: YES 1.1.1f 00:02:35.891 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:35.891 Library pcap found: NO 00:02:35.891 Compiler for C supports arguments -Wcast-qual: YES 00:02:35.891 Compiler for C supports arguments -Wdeprecated: YES 00:02:35.891 Compiler for C supports arguments -Wformat: YES 00:02:35.891 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:35.891 Compiler for C supports arguments -Wformat-security: YES 00:02:35.891 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:35.891 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:35.891 Compiler for C supports arguments -Wnested-externs: YES 00:02:35.891 Compiler for C supports arguments -Wold-style-definition: YES 00:02:35.891 Compiler for C supports arguments -Wpointer-arith: YES 00:02:35.891 Compiler for C supports arguments -Wsign-compare: YES 00:02:35.891 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:35.891 Compiler for C supports arguments -Wundef: YES 00:02:35.891 Compiler for C supports arguments -Wwrite-strings: YES 00:02:35.891 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:35.891 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:35.891 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:35.891 Program objdump found: YES (/usr/bin/objdump) 00:02:35.891 Compiler for C supports arguments -mavx512f: YES 00:02:35.891 Checking if "AVX512 checking" compiles: YES 00:02:35.891 Fetching value of define "__SSE4_2__" : 1 00:02:35.891 Fetching value of define "__AES__" : 1 00:02:35.891 Fetching value of define "__AVX__" : 1 00:02:35.891 Fetching value of define "__AVX2__" : 1 00:02:35.891 Fetching value of define "__AVX512BW__" : (undefined) 00:02:35.891 Fetching value of define "__AVX512CD__" : (undefined) 00:02:35.891 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:35.891 Fetching value of define "__AVX512F__" : (undefined) 00:02:35.891 Fetching value of define "__AVX512VL__" : (undefined) 00:02:35.891 Fetching value of define "__PCLMUL__" : 1 00:02:35.891 Fetching value of define "__RDRND__" : 1 00:02:35.891 Fetching value of define "__RDSEED__" : 1 00:02:35.891 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:35.891 Fetching value of define "__znver1__" : (undefined) 00:02:35.891 Fetching value of define "__znver2__" : (undefined) 00:02:35.891 Fetching value of define "__znver3__" : (undefined) 00:02:35.891 Fetching value of define "__znver4__" : (undefined) 00:02:35.891 Library asan found: YES 00:02:35.891 Compiler for C supports arguments 
-Wno-format-truncation: YES 00:02:35.891 Message: lib/log: Defining dependency "log" 00:02:35.891 Message: lib/kvargs: Defining dependency "kvargs" 00:02:35.891 Message: lib/telemetry: Defining dependency "telemetry" 00:02:35.891 Library rt found: YES 00:02:35.891 Checking for function "getentropy" : NO 00:02:35.891 Message: lib/eal: Defining dependency "eal" 00:02:35.891 Message: lib/ring: Defining dependency "ring" 00:02:35.891 Message: lib/rcu: Defining dependency "rcu" 00:02:35.891 Message: lib/mempool: Defining dependency "mempool" 00:02:35.891 Message: lib/mbuf: Defining dependency "mbuf" 00:02:35.891 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:35.891 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:35.891 Compiler for C supports arguments -mpclmul: YES 00:02:35.891 Compiler for C supports arguments -maes: YES 00:02:35.891 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:35.891 Compiler for C supports arguments -mavx512bw: YES 00:02:35.891 Compiler for C supports arguments -mavx512dq: YES 00:02:35.891 Compiler for C supports arguments -mavx512vl: YES 00:02:35.891 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:35.891 Compiler for C supports arguments -mavx2: YES 00:02:35.891 Compiler for C supports arguments -mavx: YES 00:02:35.891 Message: lib/net: Defining dependency "net" 00:02:35.891 Message: lib/meter: Defining dependency "meter" 00:02:35.891 Message: lib/ethdev: Defining dependency "ethdev" 00:02:35.891 Message: lib/pci: Defining dependency "pci" 00:02:35.891 Message: lib/cmdline: Defining dependency "cmdline" 00:02:35.891 Message: lib/hash: Defining dependency "hash" 00:02:35.891 Message: lib/timer: Defining dependency "timer" 00:02:35.891 Message: lib/compressdev: Defining dependency "compressdev" 00:02:35.891 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:35.891 Message: lib/dmadev: Defining dependency "dmadev" 00:02:35.891 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:35.891 Message: lib/power: Defining dependency "power" 00:02:35.891 Message: lib/reorder: Defining dependency "reorder" 00:02:35.891 Message: lib/security: Defining dependency "security" 00:02:35.891 Has header "linux/userfaultfd.h" : YES 00:02:35.891 Has header "linux/vduse.h" : NO 00:02:35.891 Message: lib/vhost: Defining dependency "vhost" 00:02:35.891 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:35.891 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:35.891 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:35.891 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:35.891 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:35.891 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:35.891 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:35.891 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:35.891 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:35.891 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:35.891 Program doxygen found: YES (/usr/bin/doxygen) 00:02:35.891 Configuring doxy-api-html.conf using configuration 00:02:35.891 Configuring doxy-api-man.conf using configuration 00:02:35.891 Program mandb found: YES (/usr/bin/mandb) 00:02:35.891 Program sphinx-build found: NO 00:02:35.891 Configuring rte_build_config.h using configuration 00:02:35.891 Message: 00:02:35.891 
================= 00:02:35.891 Applications Enabled 00:02:35.891 ================= 00:02:35.891 00:02:35.891 apps: 00:02:35.891 00:02:35.891 00:02:35.891 Message: 00:02:35.891 ================= 00:02:35.891 Libraries Enabled 00:02:35.891 ================= 00:02:35.891 00:02:35.891 libs: 00:02:35.891 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:35.891 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:35.891 cryptodev, dmadev, power, reorder, security, vhost, 00:02:35.891 00:02:35.891 Message: 00:02:35.891 =============== 00:02:35.891 Drivers Enabled 00:02:35.891 =============== 00:02:35.891 00:02:35.891 common: 00:02:35.891 00:02:35.891 bus: 00:02:35.891 pci, vdev, 00:02:35.891 mempool: 00:02:35.891 ring, 00:02:35.891 dma: 00:02:35.891 00:02:35.891 net: 00:02:35.891 00:02:35.891 crypto: 00:02:35.891 00:02:35.891 compress: 00:02:35.891 00:02:35.891 vdpa: 00:02:35.891 00:02:35.891 00:02:35.891 Message: 00:02:35.891 ================= 00:02:35.891 Content Skipped 00:02:35.891 ================= 00:02:35.891 00:02:35.891 apps: 00:02:35.891 dumpcap: explicitly disabled via build config 00:02:35.891 graph: explicitly disabled via build config 00:02:35.891 pdump: explicitly disabled via build config 00:02:35.891 proc-info: explicitly disabled via build config 00:02:35.891 test-acl: explicitly disabled via build config 00:02:35.891 test-bbdev: explicitly disabled via build config 00:02:35.891 test-cmdline: explicitly disabled via build config 00:02:35.891 test-compress-perf: explicitly disabled via build config 00:02:35.891 test-crypto-perf: explicitly disabled via build config 00:02:35.891 test-dma-perf: explicitly disabled via build config 00:02:35.891 test-eventdev: explicitly disabled via build config 00:02:35.891 test-fib: explicitly disabled via build config 00:02:35.891 test-flow-perf: explicitly disabled via build config 00:02:35.891 test-gpudev: explicitly disabled via build config 00:02:35.891 test-mldev: explicitly disabled via build config 00:02:35.891 test-pipeline: explicitly disabled via build config 00:02:35.891 test-pmd: explicitly disabled via build config 00:02:35.891 test-regex: explicitly disabled via build config 00:02:35.891 test-sad: explicitly disabled via build config 00:02:35.891 test-security-perf: explicitly disabled via build config 00:02:35.891 00:02:35.891 libs: 00:02:35.891 metrics: explicitly disabled via build config 00:02:35.891 acl: explicitly disabled via build config 00:02:35.891 bbdev: explicitly disabled via build config 00:02:35.891 bitratestats: explicitly disabled via build config 00:02:35.891 bpf: explicitly disabled via build config 00:02:35.891 cfgfile: explicitly disabled via build config 00:02:35.891 distributor: explicitly disabled via build config 00:02:35.892 efd: explicitly disabled via build config 00:02:35.892 eventdev: explicitly disabled via build config 00:02:35.892 dispatcher: explicitly disabled via build config 00:02:35.892 gpudev: explicitly disabled via build config 00:02:35.892 gro: explicitly disabled via build config 00:02:35.892 gso: explicitly disabled via build config 00:02:35.892 ip_frag: explicitly disabled via build config 00:02:35.892 jobstats: explicitly disabled via build config 00:02:35.892 latencystats: explicitly disabled via build config 00:02:35.892 lpm: explicitly disabled via build config 00:02:35.892 member: explicitly disabled via build config 00:02:35.892 pcapng: explicitly disabled via build config 00:02:35.892 rawdev: explicitly disabled via build config 00:02:35.892 regexdev: 
explicitly disabled via build config 00:02:35.892 mldev: explicitly disabled via build config 00:02:35.892 rib: explicitly disabled via build config 00:02:35.892 sched: explicitly disabled via build config 00:02:35.892 stack: explicitly disabled via build config 00:02:35.892 ipsec: explicitly disabled via build config 00:02:35.892 pdcp: explicitly disabled via build config 00:02:35.892 fib: explicitly disabled via build config 00:02:35.892 port: explicitly disabled via build config 00:02:35.892 pdump: explicitly disabled via build config 00:02:35.892 table: explicitly disabled via build config 00:02:35.892 pipeline: explicitly disabled via build config 00:02:35.892 graph: explicitly disabled via build config 00:02:35.892 node: explicitly disabled via build config 00:02:35.892 00:02:35.892 drivers: 00:02:35.892 common/cpt: not in enabled drivers build config 00:02:35.892 common/dpaax: not in enabled drivers build config 00:02:35.892 common/iavf: not in enabled drivers build config 00:02:35.892 common/idpf: not in enabled drivers build config 00:02:35.892 common/mvep: not in enabled drivers build config 00:02:35.892 common/octeontx: not in enabled drivers build config 00:02:35.892 bus/auxiliary: not in enabled drivers build config 00:02:35.892 bus/cdx: not in enabled drivers build config 00:02:35.892 bus/dpaa: not in enabled drivers build config 00:02:35.892 bus/fslmc: not in enabled drivers build config 00:02:35.892 bus/ifpga: not in enabled drivers build config 00:02:35.892 bus/platform: not in enabled drivers build config 00:02:35.892 bus/vmbus: not in enabled drivers build config 00:02:35.892 common/cnxk: not in enabled drivers build config 00:02:35.892 common/mlx5: not in enabled drivers build config 00:02:35.892 common/nfp: not in enabled drivers build config 00:02:35.892 common/qat: not in enabled drivers build config 00:02:35.892 common/sfc_efx: not in enabled drivers build config 00:02:35.892 mempool/bucket: not in enabled drivers build config 00:02:35.892 mempool/cnxk: not in enabled drivers build config 00:02:35.892 mempool/dpaa: not in enabled drivers build config 00:02:35.892 mempool/dpaa2: not in enabled drivers build config 00:02:35.892 mempool/octeontx: not in enabled drivers build config 00:02:35.892 mempool/stack: not in enabled drivers build config 00:02:35.892 dma/cnxk: not in enabled drivers build config 00:02:35.892 dma/dpaa: not in enabled drivers build config 00:02:35.892 dma/dpaa2: not in enabled drivers build config 00:02:35.892 dma/hisilicon: not in enabled drivers build config 00:02:35.892 dma/idxd: not in enabled drivers build config 00:02:35.892 dma/ioat: not in enabled drivers build config 00:02:35.892 dma/skeleton: not in enabled drivers build config 00:02:35.892 net/af_packet: not in enabled drivers build config 00:02:35.892 net/af_xdp: not in enabled drivers build config 00:02:35.892 net/ark: not in enabled drivers build config 00:02:35.892 net/atlantic: not in enabled drivers build config 00:02:35.892 net/avp: not in enabled drivers build config 00:02:35.892 net/axgbe: not in enabled drivers build config 00:02:35.892 net/bnx2x: not in enabled drivers build config 00:02:35.892 net/bnxt: not in enabled drivers build config 00:02:35.892 net/bonding: not in enabled drivers build config 00:02:35.892 net/cnxk: not in enabled drivers build config 00:02:35.892 net/cpfl: not in enabled drivers build config 00:02:35.892 net/cxgbe: not in enabled drivers build config 00:02:35.892 net/dpaa: not in enabled drivers build config 00:02:35.892 net/dpaa2: not in enabled 
drivers build config 00:02:35.892 net/e1000: not in enabled drivers build config 00:02:35.892 net/ena: not in enabled drivers build config 00:02:35.892 net/enetc: not in enabled drivers build config 00:02:35.892 net/enetfec: not in enabled drivers build config 00:02:35.892 net/enic: not in enabled drivers build config 00:02:35.892 net/failsafe: not in enabled drivers build config 00:02:35.892 net/fm10k: not in enabled drivers build config 00:02:35.892 net/gve: not in enabled drivers build config 00:02:35.892 net/hinic: not in enabled drivers build config 00:02:35.892 net/hns3: not in enabled drivers build config 00:02:35.892 net/i40e: not in enabled drivers build config 00:02:35.892 net/iavf: not in enabled drivers build config 00:02:35.892 net/ice: not in enabled drivers build config 00:02:35.892 net/idpf: not in enabled drivers build config 00:02:35.892 net/igc: not in enabled drivers build config 00:02:35.892 net/ionic: not in enabled drivers build config 00:02:35.892 net/ipn3ke: not in enabled drivers build config 00:02:35.892 net/ixgbe: not in enabled drivers build config 00:02:35.892 net/mana: not in enabled drivers build config 00:02:35.892 net/memif: not in enabled drivers build config 00:02:35.892 net/mlx4: not in enabled drivers build config 00:02:35.892 net/mlx5: not in enabled drivers build config 00:02:35.892 net/mvneta: not in enabled drivers build config 00:02:35.892 net/mvpp2: not in enabled drivers build config 00:02:35.892 net/netvsc: not in enabled drivers build config 00:02:35.892 net/nfb: not in enabled drivers build config 00:02:35.892 net/nfp: not in enabled drivers build config 00:02:35.892 net/ngbe: not in enabled drivers build config 00:02:35.892 net/null: not in enabled drivers build config 00:02:35.892 net/octeontx: not in enabled drivers build config 00:02:35.892 net/octeon_ep: not in enabled drivers build config 00:02:35.892 net/pcap: not in enabled drivers build config 00:02:35.892 net/pfe: not in enabled drivers build config 00:02:35.892 net/qede: not in enabled drivers build config 00:02:35.892 net/ring: not in enabled drivers build config 00:02:35.892 net/sfc: not in enabled drivers build config 00:02:35.892 net/softnic: not in enabled drivers build config 00:02:35.892 net/tap: not in enabled drivers build config 00:02:35.892 net/thunderx: not in enabled drivers build config 00:02:35.892 net/txgbe: not in enabled drivers build config 00:02:35.892 net/vdev_netvsc: not in enabled drivers build config 00:02:35.892 net/vhost: not in enabled drivers build config 00:02:35.892 net/virtio: not in enabled drivers build config 00:02:35.892 net/vmxnet3: not in enabled drivers build config 00:02:35.892 raw/*: missing internal dependency, "rawdev" 00:02:35.892 crypto/armv8: not in enabled drivers build config 00:02:35.892 crypto/bcmfs: not in enabled drivers build config 00:02:35.892 crypto/caam_jr: not in enabled drivers build config 00:02:35.892 crypto/ccp: not in enabled drivers build config 00:02:35.892 crypto/cnxk: not in enabled drivers build config 00:02:35.892 crypto/dpaa_sec: not in enabled drivers build config 00:02:35.892 crypto/dpaa2_sec: not in enabled drivers build config 00:02:35.892 crypto/ipsec_mb: not in enabled drivers build config 00:02:35.892 crypto/mlx5: not in enabled drivers build config 00:02:35.892 crypto/mvsam: not in enabled drivers build config 00:02:35.892 crypto/nitrox: not in enabled drivers build config 00:02:35.892 crypto/null: not in enabled drivers build config 00:02:35.892 crypto/octeontx: not in enabled drivers build config 
00:02:35.892 crypto/openssl: not in enabled drivers build config 00:02:35.892 crypto/scheduler: not in enabled drivers build config 00:02:35.892 crypto/uadk: not in enabled drivers build config 00:02:35.892 crypto/virtio: not in enabled drivers build config 00:02:35.892 compress/isal: not in enabled drivers build config 00:02:35.892 compress/mlx5: not in enabled drivers build config 00:02:35.892 compress/octeontx: not in enabled drivers build config 00:02:35.892 compress/zlib: not in enabled drivers build config 00:02:35.892 regex/*: missing internal dependency, "regexdev" 00:02:35.892 ml/*: missing internal dependency, "mldev" 00:02:35.892 vdpa/ifc: not in enabled drivers build config 00:02:35.892 vdpa/mlx5: not in enabled drivers build config 00:02:35.892 vdpa/nfp: not in enabled drivers build config 00:02:35.892 vdpa/sfc: not in enabled drivers build config 00:02:35.892 event/*: missing internal dependency, "eventdev" 00:02:35.892 baseband/*: missing internal dependency, "bbdev" 00:02:35.892 gpu/*: missing internal dependency, "gpudev" 00:02:35.892 00:02:35.892 00:02:36.150 Build targets in project: 85 00:02:36.150 00:02:36.150 DPDK 23.11.0 00:02:36.150 00:02:36.150 User defined options 00:02:36.150 buildtype : debug 00:02:36.150 default_library : static 00:02:36.150 libdir : lib 00:02:36.150 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:36.150 b_sanitize : address 00:02:36.150 c_args : -fPIC -Werror 00:02:36.150 c_link_args : 00:02:36.150 cpu_instruction_set: native 00:02:36.150 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:02:36.150 disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:02:36.150 enable_docs : false 00:02:36.150 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:36.150 enable_kmods : false 00:02:36.150 tests : false 00:02:36.150 00:02:36.150 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:36.673 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:36.673 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:36.673 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:36.673 [3/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:36.930 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:36.930 [5/264] Linking static target lib/librte_kvargs.a 00:02:36.931 [6/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:36.931 [7/264] Linking static target lib/librte_log.a 00:02:36.931 [8/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:36.931 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:36.931 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:36.931 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:37.188 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:37.188 [13/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:37.188 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:37.188 [15/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:37.188 [16/264] Linking static target lib/librte_telemetry.a 00:02:37.188 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:37.446 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:37.446 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:37.446 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:37.446 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:37.446 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:37.446 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:37.446 [24/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.446 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:37.703 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:37.704 [27/264]
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:37.704 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:37.704 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:37.961 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:37.961 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:37.961 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:37.961 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:37.961 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:38.219 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.219 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:38.219 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:38.219 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:38.219 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:38.219 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:38.219 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:38.219 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:38.487 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:38.487 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:38.487 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:38.487 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:38.487 [47/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:38.487 [48/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.487 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:38.488 [50/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.488 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:38.762 [52/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:38.762 [53/264] Linking target lib/librte_log.so.24.0 00:02:38.762 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:38.762 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:38.762 [56/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:38.762 [57/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:38.762 [58/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:38.762 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:38.762 [60/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:38.762 [61/264] Linking target lib/librte_kvargs.so.24.0 00:02:38.762 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:38.762 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:38.762 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:38.762 [65/264] Linking target lib/librte_telemetry.so.24.0 00:02:39.022 [66/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:39.022 [67/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:39.022 [68/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:39.022 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:39.022 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:39.022 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:39.022 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:39.022 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:39.022 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:39.022 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:39.022 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:39.022 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:39.280 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:39.280 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:39.280 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:39.280 [81/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:39.280 [82/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:39.280 [83/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:39.538 [84/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:39.538 [85/264] Linking static target lib/librte_ring.a 00:02:39.538 [86/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:39.538 [87/264] Linking static target lib/librte_eal.a 00:02:39.538 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:39.538 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:39.538 [90/264] Compiling C object
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:39.796 [91/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:39.796 [92/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:39.796 [93/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:39.796 [94/264] Linking static target lib/librte_mempool.a 00:02:39.796 [95/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:39.797 [96/264] Linking static target lib/librte_rcu.a 00:02:39.797 [97/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.055 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:40.055 [99/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:40.055 [100/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:40.055 [101/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.055 [102/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:40.055 [103/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:40.055 [104/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:40.313 [105/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:40.313 [106/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:40.313 [107/264] Linking static target lib/librte_net.a 00:02:40.313 [108/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:40.313 [109/264] Linking static target lib/librte_meter.a 00:02:40.313 [110/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:40.313 [111/264] Linking static target lib/librte_mbuf.a 00:02:40.313 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:40.313 [113/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.571 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:40.571 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:40.571 [116/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.571 [117/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.571 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:40.828 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:40.828 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:40.828 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:41.086 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:41.086 [123/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.086 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:41.086 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:41.086 [126/264] Linking static target lib/librte_pci.a 00:02:41.086 [127/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:41.344 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:41.344 [129/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:41.344 [130/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:41.344 [131/264] Generating lib/pci.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:41.344 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:41.344 [133/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:41.344 [134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:41.344 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:41.344 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:41.344 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:41.344 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:41.344 [139/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:41.344 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:41.344 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:41.603 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:41.603 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:41.603 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:41.603 [145/264] Linking static target lib/librte_cmdline.a 00:02:41.861 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:41.861 [147/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:41.861 [148/264] Linking static target lib/librte_timer.a 00:02:41.861 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:41.861 [150/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:41.861 [151/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:41.861 [152/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:42.119 [153/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.119 [154/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:42.377 [155/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:42.377 [156/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:42.377 [157/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:42.377 [158/264] Linking static target lib/librte_compressdev.a 00:02:42.377 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:42.377 [160/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:42.377 [161/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:42.377 [162/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:42.377 [163/264] Linking static target lib/librte_hash.a 00:02:42.377 [164/264] Linking static target lib/librte_dmadev.a 00:02:42.635 [165/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.635 [166/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:42.635 [167/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:42.635 [168/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:42.635 [169/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:42.635 [170/264] Linking static target lib/librte_ethdev.a 00:02:42.893 [171/264] Generating lib/compressdev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:42.893 [172/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:42.893 [173/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.893 [174/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:42.893 [175/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:43.151 [176/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:43.151 [177/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:43.151 [178/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.151 [179/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:43.151 [180/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:43.151 [181/264] Linking static target lib/librte_power.a 00:02:43.408 [182/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:43.408 [183/264] Linking static target lib/librte_cryptodev.a 00:02:43.408 [184/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:43.408 [185/264] Linking static target lib/librte_reorder.a 00:02:43.408 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:43.408 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:43.666 [188/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:43.666 [189/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:43.666 [190/264] Linking static target lib/librte_security.a 00:02:43.666 [191/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.924 [192/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:43.924 [193/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.924 [194/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:44.182 [195/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.182 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:44.182 [197/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:44.440 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:44.440 [199/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:44.440 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:44.440 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:44.440 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:44.698 [203/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.698 [204/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.698 [205/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:44.698 [206/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.698 [207/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:44.698 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:44.956 [209/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.956 [210/264] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.956 [211/264] Linking static target drivers/librte_bus_vdev.a 00:02:44.956 [212/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:44.956 [213/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.956 [214/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.956 [215/264] Linking static target drivers/librte_bus_pci.a 00:02:44.956 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:44.956 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:44.956 [218/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.215 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:45.215 [220/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.215 [221/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.215 [222/264] Linking static target drivers/librte_mempool_ring.a 00:02:45.473 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.864 [224/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.864 [225/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:46.864 [226/264] Linking target lib/librte_eal.so.24.0 00:02:46.864 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:47.122 [228/264] Linking target lib/librte_pci.so.24.0 00:02:47.122 [229/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:47.122 [230/264] Linking target lib/librte_ring.so.24.0 00:02:47.122 [231/264] Linking target lib/librte_meter.so.24.0 00:02:47.122 [232/264] Linking target lib/librte_timer.so.24.0 00:02:47.122 [233/264] Linking target lib/librte_dmadev.so.24.0 00:02:47.122 [234/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:47.122 [235/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:47.122 [236/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:47.122 [237/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:47.122 [238/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:47.122 [239/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:47.122 [240/264] Linking target lib/librte_mempool.so.24.0 00:02:47.122 [241/264] Linking target lib/librte_rcu.so.24.0 00:02:47.379 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:47.379 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:47.379 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:47.379 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:47.379 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:47.379 [247/264] Linking target lib/librte_net.so.24.0 00:02:47.379 [248/264] Linking target lib/librte_compressdev.so.24.0 00:02:47.379 [249/264] Linking target lib/librte_reorder.so.24.0 00:02:47.379 [250/264] Linking target lib/librte_cryptodev.so.24.0 00:02:47.637 [251/264] Generating symbol file 
lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:47.637 [252/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:47.637 [253/264] Linking target lib/librte_hash.so.24.0 00:02:47.637 [254/264] Linking target lib/librte_cmdline.so.24.0 00:02:47.637 [255/264] Linking target lib/librte_security.so.24.0 00:02:47.637 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:48.569 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.569 [258/264] Linking target lib/librte_ethdev.so.24.0 00:02:48.569 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:48.569 [260/264] Linking target lib/librte_power.so.24.0 00:02:51.099 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:51.099 [262/264] Linking static target lib/librte_vhost.a 00:02:52.471 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.471 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:52.471 INFO: autodetecting backend as ninja 00:02:52.471 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:53.404 CC lib/ut_mock/mock.o 00:02:53.404 CC lib/ut/ut.o 00:02:53.404 CC lib/log/log.o 00:02:53.404 CC lib/log/log_flags.o 00:02:53.404 CC lib/log/log_deprecated.o 00:02:53.661 LIB libspdk_ut_mock.a 00:02:53.661 LIB libspdk_log.a 00:02:53.661 LIB libspdk_ut.a 00:02:53.918 CC lib/dma/dma.o 00:02:53.918 CC lib/util/base64.o 00:02:53.918 CC lib/util/bit_array.o 00:02:53.918 CC lib/util/cpuset.o 00:02:53.918 CC lib/util/crc16.o 00:02:53.918 CC lib/util/crc32.o 00:02:53.918 CXX lib/trace_parser/trace.o 00:02:53.918 CC lib/util/crc32c.o 00:02:53.918 CC lib/ioat/ioat.o 00:02:53.918 CC lib/vfio_user/host/vfio_user_pci.o 00:02:53.918 CC lib/util/crc32_ieee.o 00:02:53.918 CC lib/vfio_user/host/vfio_user.o 00:02:53.918 CC lib/util/crc64.o 00:02:53.918 CC lib/util/dif.o 00:02:53.918 CC lib/util/fd.o 00:02:53.918 LIB libspdk_dma.a 00:02:54.176 CC lib/util/file.o 00:02:54.176 CC lib/util/hexlify.o 00:02:54.176 CC lib/util/iov.o 00:02:54.176 CC lib/util/math.o 00:02:54.176 CC lib/util/pipe.o 00:02:54.176 CC lib/util/strerror_tls.o 00:02:54.176 LIB libspdk_ioat.a 00:02:54.176 LIB libspdk_vfio_user.a 00:02:54.176 CC lib/util/string.o 00:02:54.176 CC lib/util/uuid.o 00:02:54.176 CC lib/util/fd_group.o 00:02:54.176 CC lib/util/xor.o 00:02:54.176 CC lib/util/zipf.o 00:02:54.741 LIB libspdk_util.a 00:02:54.741 CC lib/json/json_parse.o 00:02:54.741 CC lib/json/json_util.o 00:02:54.741 CC lib/idxd/idxd.o 00:02:54.741 CC lib/idxd/idxd_user.o 00:02:54.741 CC lib/rdma/common.o 00:02:54.741 CC lib/json/json_write.o 00:02:54.741 CC lib/conf/conf.o 00:02:54.741 CC lib/vmd/vmd.o 00:02:54.999 CC lib/env_dpdk/env.o 00:02:54.999 LIB libspdk_trace_parser.a 00:02:54.999 CC lib/env_dpdk/memory.o 00:02:54.999 CC lib/env_dpdk/pci.o 00:02:54.999 CC lib/env_dpdk/init.o 00:02:55.257 CC lib/rdma/rdma_verbs.o 00:02:55.257 CC lib/env_dpdk/threads.o 00:02:55.257 LIB libspdk_json.a 00:02:55.257 LIB libspdk_conf.a 00:02:55.257 CC lib/env_dpdk/pci_ioat.o 00:02:55.257 CC lib/env_dpdk/pci_virtio.o 00:02:55.257 CC lib/jsonrpc/jsonrpc_server.o 00:02:55.257 CC lib/env_dpdk/pci_vmd.o 00:02:55.257 LIB libspdk_rdma.a 00:02:55.515 CC lib/env_dpdk/pci_event.o 00:02:55.515 CC lib/env_dpdk/pci_idxd.o 00:02:55.515 CC lib/vmd/led.o 00:02:55.515 CC 
lib/env_dpdk/sigbus_handler.o 00:02:55.515 CC lib/env_dpdk/pci_dpdk.o 00:02:55.515 LIB libspdk_idxd.a 00:02:55.515 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:55.515 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:55.515 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:55.515 CC lib/jsonrpc/jsonrpc_client.o 00:02:55.515 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:55.773 LIB libspdk_vmd.a 00:02:55.773 LIB libspdk_jsonrpc.a 00:02:56.031 CC lib/rpc/rpc.o 00:02:56.290 LIB libspdk_rpc.a 00:02:56.290 CC lib/trace/trace.o 00:02:56.290 CC lib/sock/sock_rpc.o 00:02:56.290 CC lib/notify/notify_rpc.o 00:02:56.290 CC lib/sock/sock.o 00:02:56.290 CC lib/notify/notify.o 00:02:56.290 CC lib/trace/trace_rpc.o 00:02:56.290 CC lib/trace/trace_flags.o 00:02:56.546 LIB libspdk_notify.a 00:02:56.546 LIB libspdk_trace.a 00:02:56.546 LIB libspdk_env_dpdk.a 00:02:56.803 LIB libspdk_sock.a 00:02:56.803 CC lib/thread/iobuf.o 00:02:56.803 CC lib/thread/thread.o 00:02:56.804 CC lib/nvme/nvme_ctrlr.o 00:02:56.804 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:56.804 CC lib/nvme/nvme_ns_cmd.o 00:02:56.804 CC lib/nvme/nvme_ns.o 00:02:56.804 CC lib/nvme/nvme_fabric.o 00:02:56.804 CC lib/nvme/nvme_pcie_common.o 00:02:56.804 CC lib/nvme/nvme_pcie.o 00:02:56.804 CC lib/nvme/nvme_qpair.o 00:02:57.061 CC lib/nvme/nvme.o 00:02:57.319 CC lib/nvme/nvme_quirks.o 00:02:57.577 CC lib/nvme/nvme_transport.o 00:02:57.577 CC lib/nvme/nvme_discovery.o 00:02:57.577 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:57.835 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:57.835 CC lib/nvme/nvme_tcp.o 00:02:57.835 CC lib/nvme/nvme_opal.o 00:02:57.835 CC lib/nvme/nvme_io_msg.o 00:02:58.093 CC lib/nvme/nvme_poll_group.o 00:02:58.093 CC lib/nvme/nvme_zns.o 00:02:58.093 CC lib/nvme/nvme_cuse.o 00:02:58.093 CC lib/nvme/nvme_vfio_user.o 00:02:58.093 CC lib/nvme/nvme_rdma.o 00:02:58.660 LIB libspdk_thread.a 00:02:58.660 CC lib/accel/accel.o 00:02:58.660 CC lib/blob/blobstore.o 00:02:58.660 CC lib/blob/request.o 00:02:58.660 CC lib/accel/accel_rpc.o 00:02:58.660 CC lib/init/json_config.o 00:02:58.660 CC lib/virtio/virtio.o 00:02:58.918 CC lib/virtio/virtio_vhost_user.o 00:02:58.918 CC lib/blob/zeroes.o 00:02:58.918 CC lib/init/subsystem.o 00:02:59.177 CC lib/virtio/virtio_vfio_user.o 00:02:59.177 CC lib/init/subsystem_rpc.o 00:02:59.177 CC lib/accel/accel_sw.o 00:02:59.177 CC lib/blob/blob_bs_dev.o 00:02:59.177 CC lib/virtio/virtio_pci.o 00:02:59.177 CC lib/init/rpc.o 00:02:59.435 LIB libspdk_init.a 00:02:59.435 CC lib/event/app.o 00:02:59.435 CC lib/event/log_rpc.o 00:02:59.435 CC lib/event/reactor.o 00:02:59.435 CC lib/event/app_rpc.o 00:02:59.435 CC lib/event/scheduler_static.o 00:02:59.701 LIB libspdk_nvme.a 00:02:59.701 LIB libspdk_virtio.a 00:02:59.972 LIB libspdk_accel.a 00:02:59.972 LIB libspdk_event.a 00:03:00.231 CC lib/bdev/bdev.o 00:03:00.231 CC lib/bdev/bdev_zone.o 00:03:00.231 CC lib/bdev/part.o 00:03:00.231 CC lib/bdev/bdev_rpc.o 00:03:00.231 CC lib/bdev/scsi_nvme.o 00:03:02.761 LIB libspdk_blob.a 00:03:02.761 CC lib/lvol/lvol.o 00:03:02.761 CC lib/blobfs/blobfs.o 00:03:02.761 CC lib/blobfs/tree.o 00:03:03.327 LIB libspdk_bdev.a 00:03:03.585 CC lib/nbd/nbd_rpc.o 00:03:03.585 CC lib/nbd/nbd.o 00:03:03.585 CC lib/nvmf/ctrlr_discovery.o 00:03:03.585 CC lib/nvmf/ctrlr.o 00:03:03.585 CC lib/nvmf/subsystem.o 00:03:03.585 CC lib/nvmf/ctrlr_bdev.o 00:03:03.585 CC lib/ftl/ftl_core.o 00:03:03.585 CC lib/scsi/dev.o 00:03:03.585 LIB libspdk_blobfs.a 00:03:03.585 LIB libspdk_lvol.a 00:03:03.585 CC lib/nvmf/nvmf.o 00:03:03.585 CC lib/nvmf/nvmf_rpc.o 00:03:03.844 CC lib/nvmf/transport.o 00:03:03.844 CC 
lib/scsi/lun.o 00:03:04.103 LIB libspdk_nbd.a 00:03:04.103 CC lib/nvmf/tcp.o 00:03:04.103 CC lib/ftl/ftl_init.o 00:03:04.103 CC lib/nvmf/rdma.o 00:03:04.103 CC lib/scsi/port.o 00:03:04.362 CC lib/ftl/ftl_layout.o 00:03:04.362 CC lib/scsi/scsi.o 00:03:04.362 CC lib/scsi/scsi_bdev.o 00:03:04.620 CC lib/scsi/scsi_pr.o 00:03:04.620 CC lib/ftl/ftl_debug.o 00:03:04.620 CC lib/scsi/scsi_rpc.o 00:03:04.620 CC lib/scsi/task.o 00:03:04.878 CC lib/ftl/ftl_io.o 00:03:04.878 CC lib/ftl/ftl_sb.o 00:03:04.878 CC lib/ftl/ftl_l2p.o 00:03:04.878 CC lib/ftl/ftl_l2p_flat.o 00:03:04.878 CC lib/ftl/ftl_nv_cache.o 00:03:04.878 CC lib/ftl/ftl_band.o 00:03:05.136 CC lib/ftl/ftl_band_ops.o 00:03:05.136 LIB libspdk_scsi.a 00:03:05.136 CC lib/ftl/ftl_writer.o 00:03:05.136 CC lib/ftl/ftl_rq.o 00:03:05.136 CC lib/ftl/ftl_reloc.o 00:03:05.136 CC lib/iscsi/conn.o 00:03:05.394 CC lib/iscsi/init_grp.o 00:03:05.394 CC lib/ftl/ftl_l2p_cache.o 00:03:05.394 CC lib/ftl/ftl_p2l.o 00:03:05.394 CC lib/ftl/mngt/ftl_mngt.o 00:03:05.652 CC lib/iscsi/iscsi.o 00:03:05.652 CC lib/vhost/vhost.o 00:03:05.911 CC lib/vhost/vhost_rpc.o 00:03:05.911 CC lib/vhost/vhost_scsi.o 00:03:05.911 CC lib/vhost/vhost_blk.o 00:03:05.911 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:05.911 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:06.169 CC lib/iscsi/md5.o 00:03:06.169 CC lib/iscsi/param.o 00:03:06.169 CC lib/iscsi/portal_grp.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:06.169 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:06.425 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:06.425 CC lib/vhost/rte_vhost_user.o 00:03:06.425 CC lib/iscsi/tgt_node.o 00:03:06.425 CC lib/iscsi/iscsi_subsystem.o 00:03:06.683 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:06.683 CC lib/iscsi/iscsi_rpc.o 00:03:06.683 CC lib/iscsi/task.o 00:03:06.683 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:06.683 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:06.683 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:06.940 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:06.940 LIB libspdk_nvmf.a 00:03:06.940 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:06.940 CC lib/ftl/utils/ftl_conf.o 00:03:06.940 CC lib/ftl/utils/ftl_md.o 00:03:06.940 CC lib/ftl/utils/ftl_mempool.o 00:03:06.940 CC lib/ftl/utils/ftl_bitmap.o 00:03:07.198 CC lib/ftl/utils/ftl_property.o 00:03:07.198 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:07.198 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:07.198 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:07.198 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:07.198 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:07.456 LIB libspdk_iscsi.a 00:03:07.456 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:07.456 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:07.456 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:07.456 LIB libspdk_vhost.a 00:03:07.456 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:07.456 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:07.456 CC lib/ftl/base/ftl_base_dev.o 00:03:07.456 CC lib/ftl/base/ftl_base_bdev.o 00:03:07.456 CC lib/ftl/ftl_trace.o 00:03:07.715 LIB libspdk_ftl.a 00:03:07.973 CC module/env_dpdk/env_dpdk_rpc.o 00:03:08.231 CC module/blob/bdev/blob_bdev.o 00:03:08.231 CC module/accel/error/accel_error.o 00:03:08.231 CC module/sock/posix/posix.o 00:03:08.231 CC module/accel/ioat/accel_ioat.o 00:03:08.231 CC module/accel/dsa/accel_dsa.o 00:03:08.231 CC module/scheduler/gscheduler/gscheduler.o 00:03:08.231 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:08.231 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:08.231 CC module/accel/iaa/accel_iaa.o 00:03:08.231 LIB libspdk_env_dpdk_rpc.a 00:03:08.231 CC 
module/accel/iaa/accel_iaa_rpc.o 00:03:08.231 LIB libspdk_scheduler_gscheduler.a 00:03:08.231 LIB libspdk_scheduler_dpdk_governor.a 00:03:08.231 CC module/accel/ioat/accel_ioat_rpc.o 00:03:08.231 CC module/accel/error/accel_error_rpc.o 00:03:08.231 LIB libspdk_scheduler_dynamic.a 00:03:08.231 CC module/accel/dsa/accel_dsa_rpc.o 00:03:08.489 LIB libspdk_accel_iaa.a 00:03:08.489 LIB libspdk_blob_bdev.a 00:03:08.489 LIB libspdk_accel_ioat.a 00:03:08.489 LIB libspdk_accel_dsa.a 00:03:08.489 LIB libspdk_accel_error.a 00:03:08.489 CC module/bdev/gpt/gpt.o 00:03:08.489 CC module/bdev/null/bdev_null.o 00:03:08.489 CC module/bdev/delay/vbdev_delay.o 00:03:08.489 CC module/bdev/lvol/vbdev_lvol.o 00:03:08.489 CC module/bdev/error/vbdev_error.o 00:03:08.489 CC module/bdev/malloc/bdev_malloc.o 00:03:08.489 CC module/blobfs/bdev/blobfs_bdev.o 00:03:08.489 CC module/bdev/nvme/bdev_nvme.o 00:03:08.489 CC module/bdev/passthru/vbdev_passthru.o 00:03:08.748 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:08.748 CC module/bdev/gpt/vbdev_gpt.o 00:03:08.748 CC module/bdev/error/vbdev_error_rpc.o 00:03:09.007 CC module/bdev/null/bdev_null_rpc.o 00:03:09.007 LIB libspdk_blobfs_bdev.a 00:03:09.007 LIB libspdk_sock_posix.a 00:03:09.007 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:09.007 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:09.007 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:09.007 LIB libspdk_bdev_error.a 00:03:09.007 LIB libspdk_bdev_gpt.a 00:03:09.007 CC module/bdev/raid/bdev_raid.o 00:03:09.007 CC module/bdev/split/vbdev_split.o 00:03:09.007 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:09.007 CC module/bdev/split/vbdev_split_rpc.o 00:03:09.007 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:09.007 LIB libspdk_bdev_null.a 00:03:09.007 LIB libspdk_bdev_passthru.a 00:03:09.007 LIB libspdk_bdev_delay.a 00:03:09.007 CC module/bdev/raid/bdev_raid_rpc.o 00:03:09.007 LIB libspdk_bdev_malloc.a 00:03:09.007 CC module/bdev/raid/bdev_raid_sb.o 00:03:09.265 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:09.265 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:09.265 CC module/bdev/aio/bdev_aio.o 00:03:09.265 LIB libspdk_bdev_split.a 00:03:09.265 CC module/bdev/aio/bdev_aio_rpc.o 00:03:09.265 CC module/bdev/raid/raid0.o 00:03:09.523 CC module/bdev/raid/raid1.o 00:03:09.523 LIB libspdk_bdev_lvol.a 00:03:09.523 CC module/bdev/raid/concat.o 00:03:09.523 CC module/bdev/raid/raid5f.o 00:03:09.523 LIB libspdk_bdev_zone_block.a 00:03:09.523 CC module/bdev/ftl/bdev_ftl.o 00:03:09.523 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:09.523 LIB libspdk_bdev_aio.a 00:03:09.781 CC module/bdev/nvme/nvme_rpc.o 00:03:09.781 CC module/bdev/nvme/bdev_mdns_client.o 00:03:09.781 CC module/bdev/iscsi/bdev_iscsi.o 00:03:09.781 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:09.781 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:09.781 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:09.781 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:09.781 LIB libspdk_bdev_ftl.a 00:03:10.039 CC module/bdev/nvme/vbdev_opal.o 00:03:10.039 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:10.039 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:10.039 LIB libspdk_bdev_iscsi.a 00:03:10.298 LIB libspdk_bdev_raid.a 00:03:10.298 LIB libspdk_bdev_virtio.a 00:03:11.235 LIB libspdk_bdev_nvme.a 00:03:11.235 CC module/event/subsystems/sock/sock.o 00:03:11.235 CC module/event/subsystems/vmd/vmd.o 00:03:11.235 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:11.235 CC module/event/subsystems/iobuf/iobuf.o 00:03:11.235 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:11.235 CC 
module/event/subsystems/scheduler/scheduler.o 00:03:11.235 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:11.494 LIB libspdk_event_sock.a 00:03:11.494 LIB libspdk_event_scheduler.a 00:03:11.494 LIB libspdk_event_vmd.a 00:03:11.494 LIB libspdk_event_vhost_blk.a 00:03:11.494 LIB libspdk_event_iobuf.a 00:03:11.494 CC module/event/subsystems/accel/accel.o 00:03:11.753 LIB libspdk_event_accel.a 00:03:12.012 CC module/event/subsystems/bdev/bdev.o 00:03:12.012 LIB libspdk_event_bdev.a 00:03:12.270 CC module/event/subsystems/scsi/scsi.o 00:03:12.270 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:12.270 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:12.270 CC module/event/subsystems/nbd/nbd.o 00:03:12.528 LIB libspdk_event_nbd.a 00:03:12.528 LIB libspdk_event_scsi.a 00:03:12.528 LIB libspdk_event_nvmf.a 00:03:12.528 CC module/event/subsystems/iscsi/iscsi.o 00:03:12.528 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:12.786 LIB libspdk_event_vhost_scsi.a 00:03:12.786 LIB libspdk_event_iscsi.a 00:03:12.786 CC app/trace_record/trace_record.o 00:03:12.786 TEST_HEADER include/spdk/accel_module.h 00:03:12.786 TEST_HEADER include/spdk/bit_pool.h 00:03:12.786 CXX app/trace/trace.o 00:03:12.786 TEST_HEADER include/spdk/ioat.h 00:03:12.786 TEST_HEADER include/spdk/blobfs.h 00:03:12.786 TEST_HEADER include/spdk/notify.h 00:03:12.786 TEST_HEADER include/spdk/pipe.h 00:03:12.786 TEST_HEADER include/spdk/accel.h 00:03:12.786 TEST_HEADER include/spdk/file.h 00:03:12.786 TEST_HEADER include/spdk/version.h 00:03:12.786 TEST_HEADER include/spdk/trace_parser.h 00:03:12.786 TEST_HEADER include/spdk/opal_spec.h 00:03:12.786 TEST_HEADER include/spdk/uuid.h 00:03:12.786 TEST_HEADER include/spdk/likely.h 00:03:12.786 TEST_HEADER include/spdk/dif.h 00:03:12.786 TEST_HEADER include/spdk/memory.h 00:03:12.786 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:12.786 TEST_HEADER include/spdk/dma.h 00:03:12.786 TEST_HEADER include/spdk/nbd.h 00:03:13.050 TEST_HEADER include/spdk/conf.h 00:03:13.050 CC examples/accel/perf/accel_perf.o 00:03:13.050 TEST_HEADER include/spdk/env_dpdk.h 00:03:13.050 TEST_HEADER include/spdk/nvmf_spec.h 00:03:13.050 TEST_HEADER include/spdk/iscsi_spec.h 00:03:13.050 TEST_HEADER include/spdk/mmio.h 00:03:13.050 CC test/accel/dif/dif.o 00:03:13.050 TEST_HEADER include/spdk/json.h 00:03:13.050 CC test/bdev/bdevio/bdevio.o 00:03:13.050 TEST_HEADER include/spdk/opal.h 00:03:13.050 CC test/dma/test_dma/test_dma.o 00:03:13.050 TEST_HEADER include/spdk/bdev.h 00:03:13.050 CC test/blobfs/mkfs/mkfs.o 00:03:13.050 TEST_HEADER include/spdk/base64.h 00:03:13.050 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:13.050 CC test/app/bdev_svc/bdev_svc.o 00:03:13.050 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:13.050 CC examples/bdev/hello_world/hello_bdev.o 00:03:13.050 TEST_HEADER include/spdk/fd.h 00:03:13.050 TEST_HEADER include/spdk/barrier.h 00:03:13.050 TEST_HEADER include/spdk/scsi_spec.h 00:03:13.050 TEST_HEADER include/spdk/zipf.h 00:03:13.050 TEST_HEADER include/spdk/nvmf.h 00:03:13.050 TEST_HEADER include/spdk/queue.h 00:03:13.050 TEST_HEADER include/spdk/xor.h 00:03:13.050 TEST_HEADER include/spdk/cpuset.h 00:03:13.050 TEST_HEADER include/spdk/thread.h 00:03:13.050 TEST_HEADER include/spdk/bdev_zone.h 00:03:13.050 TEST_HEADER include/spdk/fd_group.h 00:03:13.050 TEST_HEADER include/spdk/tree.h 00:03:13.050 TEST_HEADER include/spdk/blob_bdev.h 00:03:13.050 TEST_HEADER include/spdk/crc64.h 00:03:13.050 TEST_HEADER include/spdk/assert.h 00:03:13.050 TEST_HEADER include/spdk/nvme_spec.h 
00:03:13.050 TEST_HEADER include/spdk/endian.h 00:03:13.050 TEST_HEADER include/spdk/pci_ids.h 00:03:13.051 TEST_HEADER include/spdk/log.h 00:03:13.051 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:13.051 TEST_HEADER include/spdk/ftl.h 00:03:13.051 TEST_HEADER include/spdk/config.h 00:03:13.051 TEST_HEADER include/spdk/vhost.h 00:03:13.051 TEST_HEADER include/spdk/bdev_module.h 00:03:13.051 TEST_HEADER include/spdk/nvme_intel.h 00:03:13.051 TEST_HEADER include/spdk/idxd_spec.h 00:03:13.051 TEST_HEADER include/spdk/crc16.h 00:03:13.051 TEST_HEADER include/spdk/nvme.h 00:03:13.051 TEST_HEADER include/spdk/stdinc.h 00:03:13.051 TEST_HEADER include/spdk/scsi.h 00:03:13.051 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:13.051 TEST_HEADER include/spdk/idxd.h 00:03:13.051 TEST_HEADER include/spdk/hexlify.h 00:03:13.051 TEST_HEADER include/spdk/reduce.h 00:03:13.051 TEST_HEADER include/spdk/crc32.h 00:03:13.051 TEST_HEADER include/spdk/init.h 00:03:13.051 TEST_HEADER include/spdk/nvmf_transport.h 00:03:13.051 TEST_HEADER include/spdk/nvme_zns.h 00:03:13.051 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:13.051 TEST_HEADER include/spdk/util.h 00:03:13.051 TEST_HEADER include/spdk/jsonrpc.h 00:03:13.051 TEST_HEADER include/spdk/env.h 00:03:13.051 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:13.051 TEST_HEADER include/spdk/lvol.h 00:03:13.051 TEST_HEADER include/spdk/histogram_data.h 00:03:13.051 TEST_HEADER include/spdk/event.h 00:03:13.051 TEST_HEADER include/spdk/trace.h 00:03:13.051 TEST_HEADER include/spdk/ioat_spec.h 00:03:13.051 TEST_HEADER include/spdk/string.h 00:03:13.051 TEST_HEADER include/spdk/ublk.h 00:03:13.051 TEST_HEADER include/spdk/bit_array.h 00:03:13.051 TEST_HEADER include/spdk/scheduler.h 00:03:13.051 TEST_HEADER include/spdk/blob.h 00:03:13.051 TEST_HEADER include/spdk/gpt_spec.h 00:03:13.051 TEST_HEADER include/spdk/sock.h 00:03:13.051 TEST_HEADER include/spdk/vmd.h 00:03:13.051 TEST_HEADER include/spdk/rpc.h 00:03:13.051 CXX test/cpp_headers/accel_module.o 00:03:13.349 LINK spdk_trace_record 00:03:13.349 LINK bdev_svc 00:03:13.349 LINK mkfs 00:03:13.349 LINK hello_bdev 00:03:13.349 CXX test/cpp_headers/bit_pool.o 00:03:13.349 LINK test_dma 00:03:13.349 LINK bdevio 00:03:13.349 LINK spdk_trace 00:03:13.608 LINK dif 00:03:13.608 CXX test/cpp_headers/ioat.o 00:03:13.608 LINK accel_perf 00:03:13.608 CXX test/cpp_headers/blobfs.o 00:03:13.865 CC app/nvmf_tgt/nvmf_main.o 00:03:13.865 CXX test/cpp_headers/notify.o 00:03:13.865 CC examples/bdev/bdevperf/bdevperf.o 00:03:13.865 LINK nvmf_tgt 00:03:13.865 CXX test/cpp_headers/pipe.o 00:03:14.122 CXX test/cpp_headers/accel.o 00:03:14.380 CXX test/cpp_headers/file.o 00:03:14.380 CXX test/cpp_headers/version.o 00:03:14.380 CC examples/blob/hello_world/hello_blob.o 00:03:14.638 CXX test/cpp_headers/trace_parser.o 00:03:14.638 LINK bdevperf 00:03:14.638 CXX test/cpp_headers/opal_spec.o 00:03:14.638 LINK hello_blob 00:03:14.895 CXX test/cpp_headers/uuid.o 00:03:15.153 CXX test/cpp_headers/likely.o 00:03:15.153 CXX test/cpp_headers/dif.o 00:03:15.410 CXX test/cpp_headers/memory.o 00:03:15.410 CXX test/cpp_headers/vfio_user_pci.o 00:03:15.667 CXX test/cpp_headers/dma.o 00:03:15.667 CXX test/cpp_headers/nbd.o 00:03:15.924 CXX test/cpp_headers/conf.o 00:03:15.924 CXX test/cpp_headers/env_dpdk.o 00:03:15.924 CC app/iscsi_tgt/iscsi_tgt.o 00:03:15.924 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:15.924 CXX test/cpp_headers/nvmf_spec.o 00:03:15.924 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:16.181 CXX test/cpp_headers/iscsi_spec.o 00:03:16.181 
CC examples/blob/cli/blobcli.o 00:03:16.181 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:16.181 LINK iscsi_tgt 00:03:16.181 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:16.181 CXX test/cpp_headers/mmio.o 00:03:16.438 CC test/env/mem_callbacks/mem_callbacks.o 00:03:16.438 CXX test/cpp_headers/json.o 00:03:16.438 LINK nvme_fuzz 00:03:16.438 CXX test/cpp_headers/opal.o 00:03:16.694 LINK blobcli 00:03:16.694 LINK vhost_fuzz 00:03:16.694 CXX test/cpp_headers/bdev.o 00:03:16.694 LINK mem_callbacks 00:03:16.951 CXX test/cpp_headers/base64.o 00:03:16.951 CC app/spdk_tgt/spdk_tgt.o 00:03:16.951 CXX test/cpp_headers/blobfs_bdev.o 00:03:17.208 LINK spdk_tgt 00:03:17.208 CC test/env/vtophys/vtophys.o 00:03:17.208 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:17.208 CXX test/cpp_headers/nvme_ocssd.o 00:03:17.208 LINK vtophys 00:03:17.208 CXX test/cpp_headers/fd.o 00:03:17.464 LINK env_dpdk_post_init 00:03:17.464 CXX test/cpp_headers/barrier.o 00:03:17.464 CXX test/cpp_headers/scsi_spec.o 00:03:17.464 CC app/spdk_nvme_perf/perf.o 00:03:17.464 CC app/spdk_lspci/spdk_lspci.o 00:03:17.721 CC app/spdk_nvme_identify/identify.o 00:03:17.721 CXX test/cpp_headers/zipf.o 00:03:17.721 LINK spdk_lspci 00:03:17.721 CXX test/cpp_headers/nvmf.o 00:03:17.978 CXX test/cpp_headers/queue.o 00:03:17.978 LINK iscsi_fuzz 00:03:17.978 CC test/app/histogram_perf/histogram_perf.o 00:03:17.978 CXX test/cpp_headers/xor.o 00:03:18.235 LINK histogram_perf 00:03:18.235 CXX test/cpp_headers/cpuset.o 00:03:18.235 CXX test/cpp_headers/thread.o 00:03:18.235 CC test/env/memory/memory_ut.o 00:03:18.493 LINK spdk_nvme_identify 00:03:18.493 LINK spdk_nvme_perf 00:03:18.493 CXX test/cpp_headers/bdev_zone.o 00:03:18.749 CXX test/cpp_headers/fd_group.o 00:03:18.749 CXX test/cpp_headers/tree.o 00:03:18.749 CC test/env/pci/pci_ut.o 00:03:18.749 CXX test/cpp_headers/blob_bdev.o 00:03:18.749 CC test/app/jsoncat/jsoncat.o 00:03:19.005 CXX test/cpp_headers/crc64.o 00:03:19.005 LINK jsoncat 00:03:19.005 CXX test/cpp_headers/assert.o 00:03:19.005 CC test/app/stub/stub.o 00:03:19.263 LINK memory_ut 00:03:19.263 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.263 LINK pci_ut 00:03:19.263 CXX test/cpp_headers/nvme_spec.o 00:03:19.263 LINK stub 00:03:19.263 CC examples/ioat/perf/perf.o 00:03:19.263 CXX test/cpp_headers/endian.o 00:03:19.263 LINK spdk_nvme_discover 00:03:19.521 CXX test/cpp_headers/pci_ids.o 00:03:19.521 CXX test/cpp_headers/log.o 00:03:19.780 CC app/spdk_top/spdk_top.o 00:03:19.780 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:19.780 LINK ioat_perf 00:03:19.780 CC examples/sock/hello_world/hello_sock.o 00:03:19.780 CC examples/nvme/hello_world/hello_world.o 00:03:19.780 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.082 CC examples/nvmf/nvmf/nvmf.o 00:03:20.082 CXX test/cpp_headers/ftl.o 00:03:20.082 LINK lsvmd 00:03:20.082 LINK hello_sock 00:03:20.082 LINK hello_world 00:03:20.339 CC examples/vmd/led/led.o 00:03:20.339 CXX test/cpp_headers/config.o 00:03:20.339 CC examples/ioat/verify/verify.o 00:03:20.339 LINK nvmf 00:03:20.339 CXX test/cpp_headers/vhost.o 00:03:20.339 CC test/event/event_perf/event_perf.o 00:03:20.339 CC app/vhost/vhost.o 00:03:20.597 LINK led 00:03:20.597 CXX test/cpp_headers/bdev_module.o 00:03:20.597 LINK event_perf 00:03:20.597 LINK verify 00:03:20.597 LINK vhost 00:03:20.856 CXX test/cpp_headers/nvme_intel.o 00:03:20.856 LINK spdk_top 00:03:20.856 CC test/event/reactor/reactor.o 00:03:20.856 CXX test/cpp_headers/idxd_spec.o 00:03:20.856 LINK reactor 00:03:21.423 CXX test/cpp_headers/crc16.o 
00:03:21.423 CC examples/nvme/reconnect/reconnect.o 00:03:21.423 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:21.423 CC app/spdk_dd/spdk_dd.o 00:03:21.423 CC examples/util/zipf/zipf.o 00:03:21.423 CC app/fio/nvme/fio_plugin.o 00:03:21.423 CXX test/cpp_headers/nvme.o 00:03:21.682 LINK zipf 00:03:21.682 CC app/fio/bdev/fio_plugin.o 00:03:21.682 CC test/event/reactor_perf/reactor_perf.o 00:03:21.682 CXX test/cpp_headers/stdinc.o 00:03:21.941 LINK reconnect 00:03:21.941 LINK spdk_dd 00:03:21.941 LINK reactor_perf 00:03:21.941 CXX test/cpp_headers/scsi.o 00:03:21.941 LINK nvme_manage 00:03:22.200 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:22.200 LINK spdk_nvme 00:03:22.200 LINK spdk_bdev 00:03:22.457 CXX test/cpp_headers/idxd.o 00:03:22.457 CC examples/thread/thread/thread_ex.o 00:03:22.716 CXX test/cpp_headers/hexlify.o 00:03:22.716 CC test/event/app_repeat/app_repeat.o 00:03:22.716 CXX test/cpp_headers/reduce.o 00:03:22.716 LINK thread 00:03:22.716 LINK app_repeat 00:03:22.975 CXX test/cpp_headers/crc32.o 00:03:22.975 CC examples/nvme/arbitration/arbitration.o 00:03:22.975 CXX test/cpp_headers/init.o 00:03:22.975 CXX test/cpp_headers/nvmf_transport.o 00:03:22.975 CXX test/cpp_headers/nvme_zns.o 00:03:23.234 CXX test/cpp_headers/vfio_user_spec.o 00:03:23.234 CC examples/nvme/hotplug/hotplug.o 00:03:23.234 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:23.234 CC examples/idxd/perf/perf.o 00:03:23.234 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:23.234 LINK arbitration 00:03:23.234 CXX test/cpp_headers/util.o 00:03:23.493 LINK cmb_copy 00:03:23.493 CXX test/cpp_headers/jsonrpc.o 00:03:23.493 LINK interrupt_tgt 00:03:23.493 LINK hotplug 00:03:23.751 CXX test/cpp_headers/env.o 00:03:23.751 LINK idxd_perf 00:03:23.751 CC test/event/scheduler/scheduler.o 00:03:23.751 CXX test/cpp_headers/nvmf_cmd.o 00:03:24.010 LINK scheduler 00:03:24.268 CXX test/cpp_headers/lvol.o 00:03:24.268 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:24.268 CC examples/nvme/abort/abort.o 00:03:24.527 CXX test/cpp_headers/histogram_data.o 00:03:24.527 CXX test/cpp_headers/event.o 00:03:24.527 CXX test/cpp_headers/trace.o 00:03:24.527 LINK pmr_persistence 00:03:24.527 CXX test/cpp_headers/ioat_spec.o 00:03:24.527 CXX test/cpp_headers/string.o 00:03:24.528 CXX test/cpp_headers/ublk.o 00:03:24.786 CC test/lvol/esnap/esnap.o 00:03:24.786 CC test/nvme/aer/aer.o 00:03:24.786 CXX test/cpp_headers/bit_array.o 00:03:24.786 CC test/nvme/reset/reset.o 00:03:24.786 LINK abort 00:03:24.786 CC test/nvme/sgl/sgl.o 00:03:25.045 CXX test/cpp_headers/scheduler.o 00:03:25.045 CC test/nvme/e2edp/nvme_dp.o 00:03:25.045 LINK aer 00:03:25.045 LINK reset 00:03:25.303 CXX test/cpp_headers/blob.o 00:03:25.303 LINK sgl 00:03:25.303 LINK nvme_dp 00:03:25.303 CC test/nvme/overhead/overhead.o 00:03:25.303 CXX test/cpp_headers/gpt_spec.o 00:03:25.303 CXX test/cpp_headers/sock.o 00:03:25.562 CXX test/cpp_headers/vmd.o 00:03:25.562 CC test/nvme/err_injection/err_injection.o 00:03:25.820 CXX test/cpp_headers/rpc.o 00:03:25.820 LINK overhead 00:03:26.079 LINK err_injection 00:03:26.079 CC test/nvme/startup/startup.o 00:03:26.079 CC test/nvme/reserve/reserve.o 00:03:26.079 CC test/rpc_client/rpc_client_test.o 00:03:26.336 LINK rpc_client_test 00:03:26.336 LINK reserve 00:03:26.336 LINK startup 00:03:26.336 CC test/nvme/boot_partition/boot_partition.o 00:03:26.336 CC test/nvme/connect_stress/connect_stress.o 00:03:26.336 CC test/nvme/simple_copy/simple_copy.o 00:03:26.594 LINK boot_partition 00:03:26.594 LINK connect_stress 00:03:26.594 LINK 
simple_copy 00:03:26.594 CC test/thread/poller_perf/poller_perf.o 00:03:26.852 CC test/thread/lock/spdk_lock.o 00:03:26.852 LINK poller_perf 00:03:26.852 CC test/nvme/compliance/nvme_compliance.o 00:03:27.110 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:27.368 LINK histogram_ut 00:03:27.368 LINK nvme_compliance 00:03:27.368 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:27.368 CC test/nvme/fused_ordering/fused_ordering.o 00:03:27.625 CC test/nvme/fdp/fdp.o 00:03:27.625 CC test/nvme/cuse/cuse.o 00:03:27.625 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:27.625 LINK fused_ordering 00:03:27.625 LINK doorbell_aers 00:03:27.883 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:27.883 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:27.883 LINK fdp 00:03:28.455 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:28.718 LINK cuse 00:03:28.718 LINK scsi_nvme_ut 00:03:28.718 LINK spdk_lock 00:03:28.718 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:28.718 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:28.976 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:28.976 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:28.976 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:29.234 LINK gpt_ut 00:03:29.491 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:29.491 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:29.491 LINK blob_bdev_ut 00:03:29.749 LINK bdev_zone_ut 00:03:29.749 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:29.749 LINK vbdev_lvol_ut 00:03:30.008 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:30.008 LINK tree_ut 00:03:30.266 LINK accel_ut 00:03:30.266 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:30.266 LINK esnap 00:03:30.266 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:30.525 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:30.525 LINK vbdev_zone_block_ut 00:03:30.525 LINK dma_ut 00:03:30.784 CC test/unit/lib/event/app.c/app_ut.o 00:03:30.784 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:31.042 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:31.301 LINK ioat_ut 00:03:31.301 LINK part_ut 00:03:31.560 LINK bdev_raid_ut 00:03:31.560 LINK app_ut 00:03:31.560 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:31.560 LINK blobfs_async_ut 00:03:31.818 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:31.818 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:31.818 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:31.818 LINK init_grp_ut 00:03:32.077 LINK conn_ut 00:03:32.077 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:32.077 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:32.077 LINK bdev_raid_sb_ut 00:03:32.336 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:32.336 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:32.595 LINK reactor_ut 00:03:32.853 LINK bdev_ut 00:03:32.853 LINK param_ut 00:03:32.853 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:32.853 LINK concat_ut 00:03:33.111 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:33.111 CC test/unit/lib/log/log.c/log_ut.o 00:03:33.111 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:33.369 LINK jsonrpc_server_ut 00:03:33.369 LINK bdev_ut 00:03:33.369 LINK blobfs_sync_ut 00:03:33.369 LINK log_ut 00:03:33.627 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:33.627 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:33.627 LINK portal_grp_ut 00:03:33.627 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:33.627 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:33.885 
LINK raid1_ut 00:03:33.885 LINK blobfs_bdev_ut 00:03:34.143 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:34.143 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:34.143 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:34.402 LINK tgt_node_ut 00:03:34.402 LINK notify_ut 00:03:34.402 LINK json_parse_ut 00:03:34.659 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:34.659 LINK raid5f_ut 00:03:34.659 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:34.659 LINK iscsi_ut 00:03:34.659 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:34.917 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:34.917 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:35.175 LINK bdev_nvme_ut 00:03:35.433 LINK json_util_ut 00:03:35.433 LINK nvme_ut 00:03:35.433 LINK lvol_ut 00:03:35.433 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:35.692 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:35.692 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:35.692 LINK json_write_ut 00:03:35.692 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:35.951 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:35.951 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:35.951 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:35.951 LINK nvme_ctrlr_cmd_ut 00:03:35.951 LINK dev_ut 00:03:36.209 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:36.209 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:36.466 LINK blob_ut 00:03:36.722 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:36.979 LINK ctrlr_bdev_ut 00:03:36.979 LINK lun_ut 00:03:36.979 LINK nvmf_ut 00:03:37.240 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:37.240 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:37.501 LINK scsi_ut 00:03:37.501 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:37.501 LINK nvme_ctrlr_ut 00:03:37.501 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:37.759 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:37.759 LINK ctrlr_discovery_ut 00:03:37.759 LINK subsystem_ut 00:03:38.017 LINK posix_ut 00:03:38.275 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:38.275 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:38.275 LINK sock_ut 00:03:38.275 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:38.533 LINK nvme_ns_ut 00:03:38.533 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:38.533 LINK ctrlr_ut 00:03:38.533 LINK scsi_bdev_ut 00:03:38.790 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:38.790 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:39.048 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:39.048 LINK tcp_ut 00:03:39.306 LINK scsi_pr_ut 00:03:39.306 LINK nvme_quirks_ut 00:03:39.564 LINK nvme_poll_group_ut 00:03:39.564 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:39.564 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:39.564 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:39.564 LINK nvme_ns_ocssd_cmd_ut 00:03:39.822 LINK rdma_ut 00:03:39.822 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:39.822 LINK nvme_qpair_ut 00:03:39.822 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:40.080 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:40.080 LINK nvme_pcie_ut 00:03:40.080 LINK nvme_ns_cmd_ut 00:03:40.080 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:40.338 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:40.338 LINK nvme_io_msg_ut 00:03:40.338 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:40.596 LINK nvme_transport_ut 00:03:40.596 CC 
test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:40.597 LINK transport_ut 00:03:40.597 LINK nvme_fabric_ut 00:03:40.597 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:40.855 LINK nvme_opal_ut 00:03:40.855 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:40.855 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:40.855 LINK base64_ut 00:03:41.113 LINK nvme_pcie_common_ut 00:03:41.113 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:41.371 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:41.371 LINK cpuset_ut 00:03:41.371 LINK iobuf_ut 00:03:41.371 LINK pci_event_ut 00:03:41.371 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:41.371 LINK bit_array_ut 00:03:41.371 LINK crc16_ut 00:03:41.371 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:41.371 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:41.629 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:41.629 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:41.629 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:41.629 LINK crc32_ieee_ut 00:03:41.629 LINK crc32c_ut 00:03:41.629 LINK nvme_cuse_ut 00:03:41.629 LINK crc64_ut 00:03:41.888 LINK subsystem_ut 00:03:41.888 LINK iov_ut 00:03:41.888 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:41.888 CC test/unit/lib/util/math.c/math_ut.o 00:03:41.888 CC test/unit/lib/util/string.c/string_ut.o 00:03:41.888 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:42.147 LINK nvme_tcp_ut 00:03:42.147 LINK math_ut 00:03:42.147 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:42.147 LINK nvme_rdma_ut 00:03:42.406 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:42.406 LINK string_ut 00:03:42.406 LINK rpc_ut 00:03:42.406 LINK pipe_ut 00:03:42.406 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:42.664 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:42.664 LINK xor_ut 00:03:42.664 LINK thread_ut 00:03:42.923 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:42.923 LINK idxd_user_ut 00:03:42.923 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:42.923 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:42.923 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:42.923 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:42.923 LINK dif_ut 00:03:43.182 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:43.182 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:43.182 LINK ftl_bitmap_ut 00:03:43.182 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:43.441 LINK common_ut 00:03:43.441 LINK idxd_ut 00:03:43.441 LINK ftl_l2p_ut 00:03:43.441 LINK ftl_mempool_ut 00:03:43.441 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:43.441 LINK ftl_io_ut 00:03:44.045 LINK ftl_mngt_ut 00:03:44.312 LINK ftl_band_ut 00:03:44.312 LINK vhost_ut 00:03:44.571 LINK ftl_sb_ut 00:03:44.571 LINK ftl_layout_upgrade_ut 00:03:44.829 00:03:44.829 real 1m52.884s 00:03:44.829 user 9m32.641s 00:03:44.829 sys 1m44.161s 00:03:44.829 16:19:21 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:44.829 ************************************ 00:03:44.829 END TEST unittest_build 00:03:44.829 ************************************ 00:03:44.830 16:19:21 -- common/autotest_common.sh@10 -- $ set +x 00:03:44.830 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:45.088 16:19:21 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:45.088 16:19:21 -- nvmf/common.sh@7 -- # uname -s 00:03:45.088 16:19:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:45.088 16:19:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:45.088 16:19:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:03:45.088 16:19:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:45.088 16:19:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:45.088 16:19:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:45.088 16:19:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:45.088 16:19:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:45.088 16:19:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:45.088 16:19:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:45.088 16:19:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ab93c45-9394-44e0-a6fb-5fc18803f29d 00:03:45.088 16:19:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=9ab93c45-9394-44e0-a6fb-5fc18803f29d 00:03:45.088 16:19:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:45.088 16:19:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:45.088 16:19:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:45.088 16:19:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:45.088 16:19:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:45.089 16:19:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.089 16:19:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.089 16:19:21 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:45.089 16:19:21 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:45.089 16:19:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:45.089 16:19:21 -- paths/export.sh@5 -- # export PATH 00:03:45.089 16:19:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:45.089 16:19:21 -- nvmf/common.sh@46 -- # : 0 00:03:45.089 16:19:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:45.089 16:19:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:45.089 16:19:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:45.089 16:19:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:45.089 16:19:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:45.089 16:19:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:45.089 16:19:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:45.089 16:19:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:45.089 16:19:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:45.089 16:19:21 -- spdk/autotest.sh@32 -- # uname -s 00:03:45.089 16:19:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:45.089 16:19:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P 
-u%u -g%g -- %E' 00:03:45.089 16:19:21 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.089 16:19:21 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:45.089 16:19:21 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.089 16:19:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:45.657 16:19:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:45.657 16:19:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:45.657 16:19:22 -- spdk/autotest.sh@48 -- # udevadm_pid=93836 00:03:45.657 16:19:22 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:45.657 16:19:22 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:45.657 16:19:22 -- spdk/autotest.sh@54 -- # echo 93869 00:03:45.657 16:19:22 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:45.657 16:19:22 -- spdk/autotest.sh@56 -- # echo 93952 00:03:45.657 16:19:22 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:45.657 16:19:22 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:45.657 16:19:22 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:45.657 16:19:22 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:45.657 16:19:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:45.657 16:19:22 -- common/autotest_common.sh@10 -- # set +x 00:03:45.657 16:19:22 -- spdk/autotest.sh@70 -- # create_test_list 00:03:45.657 16:19:22 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:45.657 16:19:22 -- common/autotest_common.sh@10 -- # set +x 00:03:45.657 16:19:22 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:45.657 16:19:22 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:45.657 16:19:22 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:45.657 16:19:22 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:45.657 16:19:22 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:45.657 16:19:22 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:45.657 16:19:22 -- common/autotest_common.sh@1440 -- # uname 00:03:45.657 16:19:22 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:45.657 16:19:22 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:45.657 16:19:22 -- common/autotest_common.sh@1460 -- # uname 00:03:45.657 16:19:22 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:45.657 16:19:22 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:45.657 16:19:22 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:45.657 16:19:22 -- spdk/autotest.sh@83 -- # hash lcov 00:03:45.657 16:19:22 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:45.657 16:19:22 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:45.657 --rc lcov_branch_coverage=1 00:03:45.657 --rc lcov_function_coverage=1 00:03:45.657 --rc genhtml_branch_coverage=1 00:03:45.657 --rc genhtml_function_coverage=1 00:03:45.657 --rc genhtml_legend=1 00:03:45.657 --rc geninfo_all_blocks=1 00:03:45.657 ' 00:03:45.657 16:19:22 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:45.657 --rc lcov_branch_coverage=1 00:03:45.657 --rc lcov_function_coverage=1 00:03:45.657 --rc 
genhtml_branch_coverage=1 00:03:45.657 --rc genhtml_function_coverage=1 00:03:45.657 --rc genhtml_legend=1 00:03:45.657 --rc geninfo_all_blocks=1 00:03:45.657 ' 00:03:45.657 16:19:22 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:45.657 --rc lcov_branch_coverage=1 00:03:45.657 --rc lcov_function_coverage=1 00:03:45.657 --rc genhtml_branch_coverage=1 00:03:45.657 --rc genhtml_function_coverage=1 00:03:45.657 --rc genhtml_legend=1 00:03:45.657 --rc geninfo_all_blocks=1 00:03:45.657 --no-external' 00:03:45.657 16:19:22 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:45.657 --rc lcov_branch_coverage=1 00:03:45.657 --rc lcov_function_coverage=1 00:03:45.657 --rc genhtml_branch_coverage=1 00:03:45.657 --rc genhtml_function_coverage=1 00:03:45.657 --rc genhtml_legend=1 00:03:45.657 --rc geninfo_all_blocks=1 00:03:45.657 --no-external' 00:03:45.657 16:19:22 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:45.657 lcov: LCOV version 1.15 00:03:45.657 16:19:22 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:47.558 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:47.558 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:47.558 
00:03:47.558 geninfo: WARNING: GCOV did not produce any data (no functions found) for the following .gcno files under /home/vagrant/spdk_repo/spdk/test/cpp_headers: memory, nbd, crc32, blob_bdev, vhost, histogram_data, bdev_zone, scheduler, bdev, scsi_spec, nvme_zns, stdinc, nvme_ocssd_spec, ftl, config, gpt_spec, rpc, trace, pipe, opal_spec, env, file, ioat_spec, endian, vmd, blobfs, nvme, blob, accel, nvmf_cmd, opal, nvme_intel, string, scsi, mmio, idxd, nvmf_transport, vfio_user_spec, queue, dif, lvol, crc64, base64, version, zipf, bdev_module, env_dpdk, init, jsonrpc, fd_group, event, iscsi_spec, util, idxd_spec, reduce, notify, accel_module, conf, xor, tree
00:04:34.481 geninfo: WARNING: GCOV did not produce any data (no functions found) for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno, ftl_p2l_upgrade.gcno and ftl_band_upgrade.gcno
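The geninfo warnings above are expected rather than a failure: each cpp_headers unit test only compiles a public header, so the resulting .gcno object carries no function records for GCOV to report. A minimal sketch of the capture step running at this point, assuming a --coverage build of the repo path shown in the log (lcov drives geninfo internally; the output file names are illustrative, not the harness's actual ones):

# capture counters from the instrumented build tree; function-less .gcno files warn but do not fail
lcov --capture --directory /home/vagrant/spdk_repo/spdk --output-file coverage.info
# optional: render an HTML report from the captured data
genhtml coverage.info --output-directory coverage_html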
00:04:34.481 16:20:09 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup
00:04:34.481 16:20:09 -- common/autotest_common.sh@712 -- # xtrace_disable
00:04:34.481 16:20:09 -- common/autotest_common.sh@10 -- # set +x
00:04:34.481 16:20:09 -- spdk/autotest.sh@102 -- # rm -f
00:04:34.481 16:20:09 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:34.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:34.481 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:04:34.481 16:20:10 -- spdk/autotest.sh@107 -- # get_zoned_devs
00:04:34.481 16:20:10 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:04:34.481 16:20:10 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:04:34.481 16:20:10 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:04:34.481 16:20:10 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:04:34.481 16:20:10 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:04:34.481 16:20:10 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:04:34.481 16:20:10 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:34.481 16:20:10 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:04:34.481 16:20:10 -- spdk/autotest.sh@109 -- # (( 0 > 0 ))
00:04:34.481 16:20:10 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1
00:04:34.481 16:20:10 -- spdk/autotest.sh@121 -- # grep -v p
00:04:34.481 16:20:10 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:04:34.481 16:20:10 -- spdk/autotest.sh@123 -- # [[ -z '' ]]
00:04:34.481 16:20:10 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1
00:04:34.481 16:20:10 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:04:34.481 16:20:10 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:34.481 No valid GPT data, bailing
00:04:34.481 16:20:10 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:34.481 16:20:10 -- scripts/common.sh@393 -- # pt=
00:04:34.481 16:20:10 -- scripts/common.sh@394 -- # return 1
00:04:34.481 16:20:10 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:34.481 1+0 records in
00:04:34.481 1+0 records out
00:04:34.481 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0354981 s, 29.5 MB/s
00:04:34.481 16:20:10 -- spdk/autotest.sh@129 -- # sync
00:04:34.481 16:20:10 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:34.481 16:20:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:34.481 16:20:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes
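The wipe traced above is gated by two safety checks: get_zoned_devs skips any block device whose queue/zoned sysfs attribute reports something other than "none", and block_in_use only lets the dd run once spdk-gpt.py and blkid both fail to find a partition table ("No valid GPT data, bailing", pt=''). A rough stand-alone equivalent of that flow, using the device name from this run (not the verbatim SPDK helpers):

dev=nvme0n1
# conventional (non-zoned) namespaces report "none" in sysfs
if [[ -e /sys/block/$dev/queue/zoned && $(</sys/block/$dev/queue/zoned) != none ]]; then
    echo "skipping zoned device $dev"; exit 0
fi
# wipe the first MiB only when no partition-table signature is present
if [[ -z $(blkid -s PTTYPE -o value /dev/$dev) ]]; then
    dd if=/dev/zero of=/dev/$dev bs=1M count=1 && sync
fi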
00:04:34.739 16:20:11 -- spdk/autotest.sh@135 -- # uname -s
00:04:34.739 16:20:11 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']'
00:04:34.739 16:20:11 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:34.739 16:20:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:34.739 16:20:11 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:34.739 16:20:11 -- common/autotest_common.sh@10 -- # set +x
00:04:34.739 ************************************
00:04:34.739 START TEST setup.sh
00:04:34.739 ************************************
00:04:34.739 16:20:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:34.739 * Looking for test storage...
00:04:34.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:34.739 16:20:11 -- setup/test-setup.sh@10 -- # uname -s
00:04:34.739 16:20:11 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:34.739 16:20:11 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:04:34.739 16:20:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:34.739 16:20:11 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:34.739 16:20:11 -- common/autotest_common.sh@10 -- # set +x
00:04:34.739 ************************************
00:04:34.739 START TEST acl
00:04:34.739 ************************************
00:04:34.998 16:20:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:04:34.998 * Looking for test storage...
00:04:34.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:34.998 16:20:11 -- setup/acl.sh@10 -- # get_zoned_devs
00:04:34.998 16:20:11 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:04:34.998 16:20:11 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:04:34.998 16:20:11 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:04:34.998 16:20:11 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:04:34.998 16:20:11 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:04:34.998 16:20:11 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:04:34.998 16:20:11 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:34.998 16:20:11 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:04:34.998 16:20:11 -- setup/acl.sh@12 -- # devs=()
00:04:34.998 16:20:11 -- setup/acl.sh@12 -- # declare -a devs
00:04:34.998 16:20:11 -- setup/acl.sh@13 -- # drivers=()
00:04:34.998 16:20:11 -- setup/acl.sh@13 -- # declare -A drivers
00:04:34.998 16:20:11 -- setup/acl.sh@51 -- # setup reset
00:04:34.998 16:20:11 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:34.998 16:20:11 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:35.257 16:20:12 -- setup/acl.sh@52 -- # collect_setup_devs
00:04:35.257 16:20:12 -- setup/acl.sh@16 -- # local dev driver
00:04:35.257 16:20:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:35.257 16:20:12 -- setup/acl.sh@15 -- # setup output status
00:04:35.257 16:20:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:35.257 16:20:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:35.516 Hugepages
00:04:35.516 node hugesize free / total
00:04:35.516 16:20:12 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:35.516 16:20:12 -- setup/acl.sh@19 -- # continue
00:04:35.516 16:20:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:35.516
00:04:35.516 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:35.516 16:20:12 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:35.516 16:20:12 -- setup/acl.sh@19 -- # continue
00:04:35.516 16:20:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:35.516 16:20:12 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]]
00:04:35.516 16:20:12 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]]
00:04:35.516 16:20:12 -- setup/acl.sh@20 -- # continue
00:04:35.516 16:20:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:35.774 16:20:12 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]]
00:04:35.774 16:20:12 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:35.774 16:20:12 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]]
00:04:35.774 16:20:12 -- setup/acl.sh@22 -- # devs+=("$dev")
00:04:35.774 16:20:12 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:35.774 16:20:12 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:35.774 16:20:12 -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:04:35.774 16:20:12 -- setup/acl.sh@54 -- # run_test denied denied
00:04:35.774 16:20:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:35.774 16:20:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:35.774 16:20:12 -- common/autotest_common.sh@10 -- # set +x
00:04:35.774 ************************************
00:04:35.774 START TEST denied
00:04:35.774 ************************************
00:04:35.774 16:20:12 -- common/autotest_common.sh@1104 -- # denied
00:04:35.774 16:20:12 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0'
00:04:35.774 16:20:12 -- setup/acl.sh@38 -- # setup output config
00:04:35.774 16:20:12 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0'
00:04:35.774 16:20:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:35.774 16:20:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:37.148 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0
00:04:37.149 16:20:13 -- setup/acl.sh@40 -- # verify 0000:00:06.0
00:04:37.149 16:20:13 -- setup/acl.sh@28 -- # local dev driver
00:04:37.149 16:20:13 -- setup/acl.sh@30 -- # for dev in "$@"
00:04:37.149 16:20:13 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]]
00:04:37.149 16:20:13 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver
00:04:37.149 16:20:13 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:37.149 16:20:13 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:37.149 16:20:13 -- setup/acl.sh@41 -- # setup reset
00:04:37.149 16:20:13 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:37.149 16:20:13 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:37.407
00:04:37.407 real 0m1.831s
00:04:37.407 user 0m0.542s
00:04:37.407 sys 0m1.342s
00:04:37.407 16:20:14 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:37.407 16:20:14 -- common/autotest_common.sh@10 -- # set +x
00:04:37.407 ************************************
00:04:37.407 END TEST denied
00:04:37.407 ************************************
00:04:37.407 16:20:14 -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:37.407 16:20:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:37.407 16:20:14 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:37.407 16:20:14 -- common/autotest_common.sh@10 -- # set +x
00:04:37.666 ************************************
00:04:37.666 START TEST allowed
00:04:37.666 ************************************
00:04:37.666 16:20:14 -- common/autotest_common.sh@1104 -- # allowed
00:04:37.666 16:20:14 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0
00:04:37.666 16:20:14 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*'
00:04:37.666 16:20:14 -- setup/acl.sh@45 -- # setup output config
00:04:37.666 16:20:14 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:37.666 16:20:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:39.039 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:39.039 16:20:15 -- setup/acl.sh@47 -- # verify
00:04:39.039 16:20:15 -- setup/acl.sh@28 -- # local dev driver
00:04:39.039 16:20:15 -- setup/acl.sh@48 -- # setup reset
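The denied/allowed pair exercises setup.sh's PCI filtering: with the controller blocked, config prints 'Skipping denied controller at 0000:00:06.0' and leaves it on the kernel nvme driver; with it allowed, config rebinds it for userspace I/O ('nvme -> uio_pci_generic' above). The same behavior can be reproduced by hand with the environment variables the trace shows, using this run's BDF:

# blocked: the controller must stay on the kernel nvme driver
PCI_BLOCKED=' 0000:00:06.0' /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
# allowed: the controller is rebound to a userspace driver (uio_pci_generic here)
PCI_ALLOWED=0000:00:06.0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset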
00:04:39.039 16:20:15 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:39.039 16:20:15 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:39.608
00:04:39.608 real 0m1.987s
00:04:39.608 user 0m0.479s
00:04:39.608 sys 0m1.470s
00:04:39.608 16:20:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:39.608 ************************************
00:04:39.608 END TEST allowed
00:04:39.608 ************************************
00:04:39.608 16:20:16 -- common/autotest_common.sh@10 -- # set +x
00:04:39.608
00:04:39.608 real 0m4.731s
00:04:39.608 user 0m1.576s
00:04:39.608 sys 0m3.204s
00:04:39.608 16:20:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:39.608 16:20:16 -- common/autotest_common.sh@10 -- # set +x
00:04:39.608 ************************************
00:04:39.608 END TEST acl
00:04:39.608 ************************************
00:04:39.608 16:20:16 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:39.608 16:20:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:39.608 16:20:16 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:39.608 16:20:16 -- common/autotest_common.sh@10 -- # set +x
00:04:39.608 ************************************
00:04:39.608 START TEST hugepages
00:04:39.608 ************************************
00:04:39.608 16:20:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:39.608 * Looking for test storage...
00:04:39.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:39.608 16:20:16 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:39.608 16:20:16 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:39.608 16:20:16 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:39.608 16:20:16 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:39.608 16:20:16 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:39.608 16:20:16 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:39.608 16:20:16 -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:39.608 16:20:16 -- setup/common.sh@18 -- # local node=
00:04:39.608 16:20:16 -- setup/common.sh@19 -- # local var val
00:04:39.608 16:20:16 -- setup/common.sh@20 -- # local mem_f mem
00:04:39.608 16:20:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.608 16:20:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.608 16:20:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.608 16:20:16 -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.608 16:20:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.608 16:20:16 -- setup/common.sh@31 -- # IFS=': '
00:04:39.608 16:20:16 -- setup/common.sh@31 -- # read -r var val _
00:04:39.608 16:20:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 3109152 kB' 'MemAvailable: 7422292 kB' 'Buffers: 37528 kB' 'Cached: 4401428 kB' 'SwapCached: 0 kB' 'Active: 1187196 kB' 'Inactive: 3365920 kB' 'Active(anon): 123188 kB' 'Inactive(anon): 1796 kB' 'Active(file): 1064008 kB' 'Inactive(file): 3364124 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 372 kB' 'Writeback: 0 kB' 'AnonPages: 132960 kB' 'Mapped: 73768 kB' 'Shmem: 2620 kB' 'KReclaimable: 206872 kB' 'Slab: 298260 kB' 'SReclaimable: 206872 kB' 'SUnreclaim: 91388 kB' 'KernelStack: 4520 kB' 'PageTables: 3576 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4028396 kB' 'Committed_AS: 580712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:39.609 16:20:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:39.609 16:20:16 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue trace repeats for every remaining /proc/meminfo key ...]
00:04:39.609 16:20:16 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:39.609 16:20:16 -- setup/common.sh@33 -- # echo 2048
00:04:39.609 16:20:16 -- setup/common.sh@33 -- # return 0
00:04:39.609 16:20:16 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:39.609 16:20:16 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
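get_meminfo walks /proc/meminfo field by field until the requested key matches, which is what produced the long compare/continue trace trimmed above; here it returns 2048, the default hugepage size in kB. The whole lookup reduces to a one-liner:

# prints 2048 on this guest: the default hugepage size in kB
awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo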
00:04:39.610 16:20:16 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:39.610 16:20:16 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:39.610 16:20:16 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:39.610 16:20:16 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:39.610 16:20:16 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:39.610 16:20:16 -- setup/hugepages.sh@207 -- # get_nodes
00:04:39.610 16:20:16 -- setup/hugepages.sh@27 -- # local node
00:04:39.610 16:20:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.610 16:20:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:39.610 16:20:16 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:39.610 16:20:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:39.610 16:20:16 -- setup/hugepages.sh@208 -- # clear_hp
00:04:39.610 16:20:16 -- setup/hugepages.sh@37 -- # local node hp
00:04:39.610 16:20:16 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:39.610 16:20:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:39.610 16:20:16 -- setup/hugepages.sh@41 -- # echo 0
00:04:39.610 16:20:16 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:39.610 16:20:16 -- setup/hugepages.sh@41 -- # echo 0
00:04:39.610 16:20:16 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:39.610 16:20:16 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:39.610 16:20:16 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:39.610 16:20:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:39.610 16:20:16 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:39.610 16:20:16 -- common/autotest_common.sh@10 -- # set +x
00:04:39.610 ************************************
00:04:39.610 START TEST default_setup
00:04:39.610 ************************************
00:04:39.610 16:20:16 -- common/autotest_common.sh@1104 -- # default_setup
00:04:39.610 16:20:16 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:39.610 16:20:16 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:39.610 16:20:16 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:39.610 16:20:16 -- setup/hugepages.sh@51 -- # shift
00:04:39.610 16:20:16 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:39.610 16:20:16 -- setup/hugepages.sh@52 -- # local node_ids
00:04:39.610 16:20:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:39.610 16:20:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:39.610 16:20:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:39.610 16:20:16 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:39.610 16:20:16 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:39.610 16:20:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:39.610 16:20:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:39.610 16:20:16 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:39.610 16:20:16 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:39.610 16:20:16 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:39.610 16:20:16 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:39.869 16:20:16 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:39.869 16:20:16 -- setup/hugepages.sh@73 -- # return 0
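get_test_nr_hugepages converts the 2097152 kB request into 1024 pages of the 2048 kB default size, after clear_hp has zeroed every per-node counter (one echo 0 per supported page size, hence the two iterations above). Performed by hand on this single-node guest, the sequence would look roughly like this (the node0 and per-size paths are assumptions based on the trace and the earlier status output):

echo 0    > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages     # clear_hp, 2 MiB pool
echo 0    > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages  # clear_hp, 1 GiB pool
echo 1024 > /proc/sys/vm/nr_hugepages    # 2097152 kB / 2048 kB per page = 1024 pages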
00:04:39.869 16:20:16 -- setup/hugepages.sh@137 -- # setup output
00:04:39.869 16:20:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:39.869 16:20:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:40.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:40.127 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:40.697 16:20:17 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:40.698 16:20:17 -- setup/hugepages.sh@89 -- # local node
00:04:40.698 16:20:17 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:40.698 16:20:17 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:40.698 16:20:17 -- setup/hugepages.sh@92 -- # local surp
00:04:40.698 16:20:17 -- setup/hugepages.sh@93 -- # local resv
00:04:40.698 16:20:17 -- setup/hugepages.sh@94 -- # local anon
00:04:40.698 16:20:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:40.698 16:20:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:40.698 16:20:17 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:40.698 16:20:17 -- setup/common.sh@18 -- # local node=
00:04:40.698 16:20:17 -- setup/common.sh@19 -- # local var val
00:04:40.698 16:20:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.698 16:20:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.698 16:20:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.698 16:20:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.698 16:20:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.698 16:20:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.698 16:20:17 -- setup/common.sh@31 -- # IFS=': '
00:04:40.698 16:20:17 -- setup/common.sh@31 -- # read -r var val _
00:04:40.698 16:20:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5196704 kB' 'MemAvailable: 9509880 kB' 'Buffers: 37528 kB' 'Cached: 4401508 kB' 'SwapCached: 0 kB' 'Active: 1202052 kB' 'Inactive: 3365960 kB' 'Active(anon): 138000 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064052 kB' 'Inactive(file): 3364168 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 147464 kB' 'Mapped: 73232 kB' 'Shmem: 2616 kB' 'KReclaimable: 206820 kB' 'Slab: 298744 kB' 'SReclaimable: 206820 kB' 'SUnreclaim: 91924 kB' 'KernelStack: 4512 kB' 'PageTables: 3624 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 615588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14260 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:40.698 16:20:17 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:40.698 16:20:17 -- setup/common.sh@32 -- # continue
[... the IFS=': ' / read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue trace repeats for every remaining /proc/meminfo key ...]
00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:40.699 16:20:17 -- setup/common.sh@33 -- # echo 0
00:04:40.699 16:20:17 -- setup/common.sh@33 -- # return 0
00:04:40.699 16:20:17 -- setup/hugepages.sh@97 -- # anon=0
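verify_nr_hugepages then reads AnonHugePages (0 in the dump above) and HugePages_Surp back out of /proc/meminfo to confirm the pool matches what was requested and nothing spilled into surplus. The equivalent manual check:

# after default_setup expect Total/Free = 1024 and Rsvd/Surp = 0
grep -E '^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)):' /proc/meminfo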
mem=("${mem[@]#Node +([0-9]) }") 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5196468 kB' 'MemAvailable: 9509644 kB' 'Buffers: 37528 kB' 'Cached: 4401508 kB' 'SwapCached: 0 kB' 'Active: 1202300 kB' 'Inactive: 3365960 kB' 'Active(anon): 138248 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064052 kB' 'Inactive(file): 3364168 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 147740 kB' 'Mapped: 73232 kB' 'Shmem: 2616 kB' 'KReclaimable: 206820 kB' 'Slab: 298744 kB' 'SReclaimable: 206820 kB' 'SUnreclaim: 91924 kB' 'KernelStack: 4544 kB' 'PageTables: 3680 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 615588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14276 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 
16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.699 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.699 16:20:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 
00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 
00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.700 16:20:17 -- setup/common.sh@33 -- # echo 0 00:04:40.700 16:20:17 -- setup/common.sh@33 -- # return 0 00:04:40.700 16:20:17 -- setup/hugepages.sh@99 -- # surp=0 00:04:40.700 16:20:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:40.700 16:20:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:40.700 16:20:17 -- setup/common.sh@18 -- # local node= 00:04:40.700 16:20:17 -- setup/common.sh@19 -- # local var val 00:04:40.700 16:20:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:40.700 16:20:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.700 16:20:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.700 16:20:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.700 16:20:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.700 16:20:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5196468 kB' 'MemAvailable: 9509644 kB' 'Buffers: 37528 kB' 'Cached: 4401508 kB' 'SwapCached: 0 kB' 'Active: 1202344 kB' 'Inactive: 3365960 kB' 'Active(anon): 138292 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064052 kB' 'Inactive(file): 3364168 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 147736 kB' 'Mapped: 73232 kB' 'Shmem: 2616 kB' 'KReclaimable: 206820 kB' 'Slab: 298744 kB' 'SReclaimable: 206820 kB' 'SUnreclaim: 91924 kB' 'KernelStack: 4512 kB' 'PageTables: 3616 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 621236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14292 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 
16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.700 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.700 16:20:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- 
setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # continue 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.701 16:20:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.701 16:20:17 -- setup/common.sh@33 -- # echo 0 00:04:40.701 16:20:17 -- setup/common.sh@33 -- # return 0 00:04:40.701 nr_hugepages=1024 00:04:40.701 resv_hugepages=0 00:04:40.701 surplus_hugepages=0 00:04:40.701 anon_hugepages=0 00:04:40.701 16:20:17 -- setup/hugepages.sh@100 -- # resv=0 00:04:40.701 16:20:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:40.701 16:20:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:40.701 16:20:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:40.701 16:20:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:40.701 16:20:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.701 16:20:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:40.701 16:20:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:40.701 16:20:17 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:40.701 16:20:17 -- setup/common.sh@18 -- # local node= 00:04:40.701 16:20:17 -- setup/common.sh@19 -- # local var val 00:04:40.701 16:20:17 -- setup/common.sh@20 -- # local mem_f mem 00:04:40.701 16:20:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.701 16:20:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.701 16:20:17 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.701 16:20:17 -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.701 16:20:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': ' 00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _ 00:04:40.702 16:20:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5196224 kB' 'MemAvailable: 9509400 kB' 'Buffers: 37528 kB' 'Cached: 4401508 kB' 'SwapCached: 0 kB' 'Active: 1202328 kB' 'Inactive: 3365960 kB' 'Active(anon): 138276 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064052 kB' 
00:04:40.701 16:20:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:40.701 16:20:17 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:40.701 16:20:17 -- setup/common.sh@18 -- # local node=
00:04:40.701 16:20:17 -- setup/common.sh@19 -- # local var val
00:04:40.701 16:20:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.701 16:20:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.701 16:20:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.701 16:20:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.701 16:20:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.701 16:20:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.701 16:20:17 -- setup/common.sh@31 -- # IFS=': '
00:04:40.701 16:20:17 -- setup/common.sh@31 -- # read -r var val _
00:04:40.702 16:20:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5196224 kB' 'MemAvailable: 9509400 kB' 'Buffers: 37528 kB' 'Cached: 4401508 kB' 'SwapCached: 0 kB' 'Active: 1202328 kB' 'Inactive: 3365960 kB' 'Active(anon): 138276 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064052 kB' 'Inactive(file): 3364168 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 147584 kB' 'Mapped: 73232 kB' 'Shmem: 2616 kB' 'KReclaimable: 206820 kB' 'Slab: 298744 kB' 'SReclaimable: 206820 kB' 'SUnreclaim: 91924 kB' 'KernelStack: 4564 kB' 'PageTables: 3588 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 619724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[repetitive xtrace condensed: the same per-key scan repeats until it reaches HugePages_Total]
00:04:40.703 16:20:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:40.703 16:20:17 -- setup/common.sh@33 -- # echo 1024
00:04:40.703 16:20:17 -- setup/common.sh@33 -- # return 0
00:04:40.703 16:20:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:40.703 16:20:17 -- setup/hugepages.sh@112 -- # get_nodes
00:04:40.703 16:20:17 -- setup/hugepages.sh@27 -- # local node
00:04:40.703 16:20:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:40.703 16:20:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:40.703 16:20:17 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:40.703 16:20:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
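The get_nodes step just traced enumerates NUMA nodes from sysfs and records one hugepage count per node. Roughly, under the same extglob assumption and reusing the get_meminfo sketch above (how the count is fetched is an assumption; the xtrace only shows the already-expanded assignment):

shopt -s extglob
nodes_sys=()
get_nodes() {
    local node
    for node in /sys/devices/system/node/node+([0-9]); do
        # Index by numeric node id (node0 -> 0); store that node's
        # HugePages_Total as read from its per-node meminfo.
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 ))   # this VM has a single node, hence no_nodes=1 above
}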
00:04:40.703 16:20:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:40.703 16:20:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:40.703 16:20:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:40.703 16:20:17 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:40.703 16:20:17 -- setup/common.sh@18 -- # local node=0
00:04:40.703 16:20:17 -- setup/common.sh@19 -- # local var val
00:04:40.703 16:20:17 -- setup/common.sh@20 -- # local mem_f mem
00:04:40.703 16:20:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.703 16:20:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:40.703 16:20:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:40.703 16:20:17 -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.703 16:20:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.703 16:20:17 -- setup/common.sh@31 -- # IFS=': '
00:04:40.703 16:20:17 -- setup/common.sh@31 -- # read -r var val _
00:04:40.703 16:20:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5196224 kB' 'MemUsed: 7054872 kB' 'Active: 1202588 kB' 'Inactive: 3365960 kB' 'Active(anon): 138536 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064052 kB' 'Inactive(file): 3364168 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'FilePages: 4439036 kB' 'Mapped: 73232 kB' 'AnonPages: 147456 kB' 'Shmem: 2616 kB' 'KernelStack: 4564 kB' 'PageTables: 3588 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206820 kB' 'Slab: 298744 kB' 'SReclaimable: 206820 kB' 'SUnreclaim: 91924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[repetitive xtrace condensed: the per-key scan repeats over the node0 snapshot until it reaches HugePages_Surp]
00:04:40.704 16:20:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.704 16:20:17 -- setup/common.sh@33 -- # echo 0
00:04:40.704 16:20:17 -- setup/common.sh@33 -- # return 0
00:04:40.704 node0=1024 expecting 1024
************************************
00:04:40.704 END TEST default_setup
************************************
00:04:40.704 16:20:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:40.704 16:20:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:40.704 16:20:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:40.704 16:20:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:40.704 16:20:17 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:40.704 16:20:17 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:40.704
00:04:40.704 real 0m1.088s
00:04:40.704 user 0m0.320s
00:04:40.704 sys 0m0.727s
00:04:40.704 16:20:17 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:40.704 16:20:17 -- common/autotest_common.sh@10 -- # set +x
00:04:40.963 16:20:17 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:40.963 16:20:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:40.963 16:20:17 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:40.963 16:20:17 -- common/autotest_common.sh@10 -- # set +x
00:04:40.963 ************************************
00:04:40.963 START TEST per_node_1G_alloc
************************************
00:04:40.963 16:20:17 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:04:40.963 16:20:17 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:40.963 16:20:17 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:40.963 16:20:17 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:40.963 16:20:17 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:40.963 16:20:17 -- setup/hugepages.sh@51 -- # shift
00:04:40.963 16:20:17 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:40.963 16:20:17 -- setup/hugepages.sh@52 -- # local node_ids
00:04:40.963 16:20:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:40.963 16:20:17 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:40.963 16:20:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:40.963 16:20:17 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:40.963 16:20:17 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:40.963 16:20:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:40.963 16:20:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:40.963 16:20:17 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:40.963 16:20:17 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:40.963 16:20:17 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:40.963 16:20:17 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:40.963 16:20:17 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:40.963 16:20:17 -- setup/hugepages.sh@73 -- # return 0
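The per_node_1G_alloc sizing traced above reduces to one division: a 1 GiB request split into default-size hugepages, all pinned to one node. As a worked check (illustrative variable names, not the verbatim hugepages.sh helpers):

# get_test_nr_hugepages 1048576 0, in plain arithmetic: the size argument is
# in kB, the default hugepage size on this VM is 2048 kB (Hugepagesize in the
# snapshots above), and the whole pool is requested on node 0.
size_kb=1048576                              # 1 GiB requested
hugepage_kb=2048                             # 2 MiB default hugepages
nr_hugepages=$(( size_kb / hugepage_kb ))    # = 512, matching NRHUGE=512 below
echo "nodes_test[0]=$nr_hugepages"           # all 512 pages assigned to node 0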
00:04:40.963 16:20:17 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:40.963 16:20:17 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:40.963 16:20:17 -- setup/hugepages.sh@146 -- # setup output
00:04:40.963 16:20:17 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:40.963 16:20:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:41.222 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:41.222 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:41.485 16:20:18 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:41.485 16:20:18 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:41.485 16:20:18 -- setup/hugepages.sh@89 -- # local node
00:04:41.485 16:20:18 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:41.485 16:20:18 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:41.485 16:20:18 -- setup/hugepages.sh@92 -- # local surp
00:04:41.485 16:20:18 -- setup/hugepages.sh@93 -- # local resv
00:04:41.485 16:20:18 -- setup/hugepages.sh@94 -- # local anon
00:04:41.485 16:20:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:41.485 16:20:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:41.485 16:20:18 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:41.485 16:20:18 -- setup/common.sh@18 -- # local node=
00:04:41.485 16:20:18 -- setup/common.sh@19 -- # local var val
00:04:41.485 16:20:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:41.485 16:20:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.485 16:20:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.485 16:20:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.485 16:20:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.485 16:20:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.485 16:20:18 -- setup/common.sh@31 -- # IFS=': '
00:04:41.485 16:20:18 -- setup/common.sh@31 -- # read -r var val _
00:04:41.485 16:20:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 6241016 kB' 'MemAvailable: 10554192 kB' 'Buffers: 37528 kB' 'Cached: 4401508 kB' 'SwapCached: 0 kB' 'Active: 1202532 kB' 'Inactive: 3365964 kB' 'Active(anon): 138484 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064048 kB' 'Inactive(file): 3364172 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 147820 kB' 'Mapped: 73468 kB' 'Shmem: 2616 kB' 'KReclaimable: 206820 kB' 'Slab: 298704 kB' 'SReclaimable: 206820 kB' 'SUnreclaim: 91884 kB' 'KernelStack: 4632 kB' 'PageTables: 3660 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 617872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:41.485 16:20:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:41.485 16:20:18 -- setup/common.sh@32 -- # continue
[... the @31 read / @32 compare / @32 continue cycle repeats for every field from MemFree through HardwareCorrupted; none match ...]
00:04:41.486 16:20:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:41.486 16:20:18 -- setup/common.sh@33 -- # echo 0
00:04:41.486 16:20:18 -- setup/common.sh@33 -- # return 0
00:04:41.486 16:20:18 -- setup/hugepages.sh@97 -- # anon=0
00:04:41.486 16:20:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
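The cycle above -- mapfile over /proc/meminfo, strip any "Node N " prefix, then one read/compare/continue pass per field -- is the whole of the get_meminfo helper. A minimal sketch of that loop, reconstructed from the trace rather than copied from setup/common.sh, so treat names and details as approximations (the real helper also takes an optional node argument, seen later in this log):

  #!/usr/bin/env bash
  # Sketch of the parser traced above; reconstructed from the log, not upstream source.
  shopt -s extglob   # required for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1
      local var val
      local mem_f mem

      mem_f=/proc/meminfo
      mapfile -t mem < "$mem_f"
      # Per-node meminfo lines carry a "Node N " prefix; this is a no-op for /proc/meminfo.
      mem=("${mem[@]#Node +([0-9]) }")

      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"   # kB for sized fields, a bare page count for HugePages_*
          return 0
      done
  }

Called as get_meminfo AnonHugePages it prints 0 on this run, which is the value hugepages.sh@97 stores into anon.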
00:04:41.486 16:20:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.486 16:20:18 -- setup/common.sh@18 -- # local node=
00:04:41.486 16:20:18 -- setup/common.sh@19 -- # local var val
00:04:41.486 16:20:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:41.486 16:20:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.486 16:20:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.486 16:20:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.486 16:20:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.486 16:20:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.486 16:20:18 -- setup/common.sh@31 -- # IFS=': '
00:04:41.486 16:20:18 -- setup/common.sh@31 -- # read -r var val _
00:04:41.486 16:20:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 6241284 kB' 'MemAvailable: 10554460 kB' 'Buffers: 37528 kB' 'Cached: 4401508 kB' 'SwapCached: 0 kB' 'Active: 1202532 kB' 'Inactive: 3365964 kB' 'Active(anon): 138484 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064048 kB' 'Inactive(file): 3364172 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 148056 kB' 'Mapped: 73468 kB' 'Shmem: 2616 kB' 'KReclaimable: 206820 kB' 'Slab: 298704 kB' 'SReclaimable: 206820 kB' 'SUnreclaim: 91884 kB' 'KernelStack: 4616 kB' 'PageTables: 3636 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 622752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:41.486 16:20:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.486 16:20:18 -- setup/common.sh@32 -- # continue
[... the @31 read / @32 compare / @32 continue cycle repeats for every field from MemFree through HugePages_Rsvd; none match ...]
00:04:41.487 16:20:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.487 16:20:18 -- setup/common.sh@33 -- # echo 0
00:04:41.487 16:20:18 -- setup/common.sh@33 -- # return 0
00:04:41.487 16:20:18 -- setup/hugepages.sh@99 -- # surp=0
00:04:41.487 16:20:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:41.487 16:20:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:41.487 16:20:18 -- setup/common.sh@18 -- # local node=
00:04:41.487 16:20:18 -- setup/common.sh@19 -- # local var val
00:04:41.487 16:20:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:41.487 16:20:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.487 16:20:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.487 16:20:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.487 16:20:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.487 16:20:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.487 16:20:18 -- setup/common.sh@31 -- # IFS=': '
00:04:41.487 16:20:18 -- setup/common.sh@31 -- # read -r var val _
00:04:41.487 16:20:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 6241544 kB' 'MemAvailable: 10554720 kB' 'Buffers: 37528 kB' 'Cached: 4401508 kB' 'SwapCached: 0 kB' 'Active: 1202532 kB' 'Inactive: 3365964 kB' 'Active(anon): 138484 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064048 kB' 'Inactive(file): 3364172 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 147540 kB' 'Mapped: 73468 kB' 'Shmem: 2616 kB' 'KReclaimable: 206820 kB' 'Slab: 298704 kB' 'SReclaimable: 206820 kB' 'SUnreclaim: 91884 kB' 'KernelStack: 4616 kB' 'PageTables: 3636 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 617800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:41.488 16:20:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:41.488 16:20:18 -- setup/common.sh@32 -- # continue
[... the @31 read / @32 compare / @32 continue cycle repeats for every field from MemFree through HugePages_Free; none match ...]
00:04:41.489 16:20:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:41.489 16:20:18 -- setup/common.sh@33 -- # echo 0
00:04:41.489 16:20:18 -- setup/common.sh@33 -- # return 0
00:04:41.489 16:20:18 -- setup/hugepages.sh@100 -- # resv=0
00:04:41.489 16:20:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:41.489 nr_hugepages=512
00:04:41.489 16:20:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:41.489 resv_hugepages=0
00:04:41.489 16:20:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:41.489 surplus_hugepages=0
00:04:41.489 16:20:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:41.489 anon_hugepages=0
00:04:41.489 16:20:18 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:41.489 16:20:18 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
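At this point the three correction terms are all zero, so the bookkeeping hugepages.sh@102-110 performs collapses to "the kernel must report exactly the 512 pages that were requested". Roughly, using the variable names from the trace (a hedged sketch, not the script's exact code):

  # Sketch of the consistency check traced at hugepages.sh@102-110.
  nr_hugepages=512
  anon=$(get_meminfo AnonHugePages)    # 0 in this run
  surp=$(get_meminfo HugePages_Surp)   # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run

  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"

  # HugePages_Total must equal the requested pool plus surplus and reserved pages.
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))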
00:04:41.489 16:20:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:41.489 16:20:18 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:41.489 16:20:18 -- setup/common.sh@18 -- # local node=
00:04:41.489 16:20:18 -- setup/common.sh@19 -- # local var val
00:04:41.489 16:20:18 -- setup/common.sh@20 -- # local mem_f mem
00:04:41.489 16:20:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.489 16:20:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.489 16:20:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.489 16:20:18 -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.489 16:20:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.489 16:20:18 -- setup/common.sh@31 -- # IFS=': '
00:04:41.489 16:20:18 -- setup/common.sh@31 -- # read -r var val _
00:04:41.489 16:20:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 6241892 kB' 'MemAvailable: 10555064 kB' 'Buffers: 37528 kB' 'Cached: 4401504 kB' 'SwapCached: 0 kB' 'Active: 1202080 kB' 'Inactive: 3365944 kB' 'Active(anon): 138016 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064064 kB' 'Inactive(file): 3364152 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 147724 kB' 'Mapped: 73248 kB' 'Shmem: 2616 kB' 'KReclaimable: 206820 kB' 'Slab: 298704 kB' 'SReclaimable: 206820 kB' 'SUnreclaim: 91884 kB' 'KernelStack: 4580 kB' 'PageTables: 3688 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 616524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:41.489 16:20:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:41.489 16:20:18 -- setup/common.sh@32 -- # continue
[... the @31 read / @32 compare / @32 continue cycle repeats for every field from MemFree through CmaFree; none match ...]
00:04:41.490 16:20:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:41.490 16:20:18 -- setup/common.sh@33 -- # echo 512
00:04:41.490 16:20:18 -- setup/common.sh@33 -- # return 0
00:04:41.490 16:20:18 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:41.490 16:20:18 -- setup/hugepages.sh@112 -- # get_nodes
00:04:41.490 16:20:18 -- setup/hugepages.sh@27 -- # local node
00:04:41.490 16:20:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
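get_nodes simply enumerates the node directories under sysfs; on this single-node guest that is node0 alone, so no_nodes ends up as 1 and the whole 512-page pool is expected on that node. A rough equivalent (nodes_sys and no_nodes are the names the trace shows; how the per-node quota is derived is simplified here):

  # Sketch of get_nodes as traced at hugepages.sh@27-33.
  shopt -s extglob nullglob
  declare -A nodes_sys

  get_nodes() {
      local node
      for node in /sys/devices/system/node/node+([0-9]); do
          # ${node##*node} strips everything up to the last "node", leaving the id.
          nodes_sys[${node##*node}]=512   # 512 is the expanded value in the trace;
                                          # the real script computes this per node
      done
      no_nodes=${#nodes_sys[@]}
      (( no_nodes > 0 ))   # at least one NUMA node must be present
  }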
/sys/devices/system/node/node+([0-9]) 00:04:41.490 16:20:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.490 16:20:18 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:41.490 16:20:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.490 16:20:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.490 16:20:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.490 16:20:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.490 16:20:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.490 16:20:18 -- setup/common.sh@18 -- # local node=0 00:04:41.490 16:20:18 -- setup/common.sh@19 -- # local var val 00:04:41.490 16:20:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.490 16:20:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.490 16:20:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.490 16:20:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.490 16:20:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.490 16:20:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.490 16:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 16:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 16:20:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 6241892 kB' 'MemUsed: 6009204 kB' 'Active: 1202080 kB' 'Inactive: 3365944 kB' 'Active(anon): 138016 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064064 kB' 'Inactive(file): 3364152 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'FilePages: 4439032 kB' 'Mapped: 73248 kB' 'AnonPages: 147984 kB' 'Shmem: 2616 kB' 'KernelStack: 4580 kB' 'PageTables: 3688 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206820 kB' 'Slab: 298704 kB' 'SReclaimable: 206820 kB' 'SUnreclaim: 91884 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.490 16:20:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 16:20:18 -- setup/common.sh@32 -- # continue 00:04:41.490 16:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 16:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 16:20:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 16:20:18 -- setup/common.sh@32 -- # continue 00:04:41.490 16:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 16:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 16:20:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 16:20:18 -- setup/common.sh@32 -- # continue 00:04:41.490 16:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 16:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.490 16:20:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.490 16:20:18 -- setup/common.sh@32 -- # continue 00:04:41.490 16:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.490 16:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.491 16:20:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.491 16:20:18 -- setup/common.sh@32 -- # continue 00:04:41.491 16:20:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.491 16:20:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.491 16:20:18 -- setup/common.sh@32 -- 
# [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.491 16:20:18 -- setup/common.sh@32 -- # continue
00:04:41.491 [... repetitive xtrace elided: the same IFS=': ' / read -r var val _ / compare / continue cycle repeats for every remaining /proc/meminfo field, Inactive(anon) through HugePages_Free, with no match against HugePages_Surp ...]
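The backslash-riddled right-hand sides in this trace (\H\u\g\e\P\a\g\e\s\_\S\u\r\p and the like) are bash xtrace at work, not corruption: when the pattern side of == inside [[ ]] comes from a quoted expansion, bash matches it literally, and set -x prints it with each character escaped so the traced line still denotes a literal match. A tiny standalone illustration (variable names are placeholders, not the script's own):

    var='Active(anon)' get='HugePages_Surp'
    set -x
    [[ $var == "$get" ]]    # traced as: [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x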
00:04:41.491 16:20:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.491 16:20:18 -- setup/common.sh@33 -- # echo 0
00:04:41.491 16:20:18 -- setup/common.sh@33 -- # return 0
00:04:41.491 16:20:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:41.491 16:20:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:41.491 16:20:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:41.491 16:20:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:41.491 node0=512 expecting 512
00:04:41.491 16:20:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:41.491 16:20:18 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:41.491 
00:04:41.491 real	0m0.630s
00:04:41.491 user	0m0.259s
00:04:41.491 sys	0m0.403s
00:04:41.491 16:20:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:41.491 16:20:18 -- common/autotest_common.sh@10 -- # set +x
00:04:41.491 ************************************
00:04:41.491 END TEST per_node_1G_alloc
00:04:41.491 ************************************
00:04:41.491 16:20:18 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:41.491 16:20:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:41.491 16:20:18 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:41.491 16:20:18 -- common/autotest_common.sh@10 -- # set +x
00:04:41.491 ************************************
00:04:41.491 START TEST even_2G_alloc
00:04:41.491 ************************************
00:04:41.491 16:20:18 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:04:41.491 16:20:18 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:41.491 16:20:18 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:41.491 16:20:18 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:41.491 16:20:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:41.491 16:20:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:41.491 16:20:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:41.491 16:20:18 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:41.491 16:20:18 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:41.491 16:20:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:41.491 16:20:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:41.491 16:20:18 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:41.491 16:20:18 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:41.491 16:20:18 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:41.491 16:20:18 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:41.491 16:20:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:41.491 16:20:18 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
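The get_test_nr_hugepages 2097152 call above turns a requested pool size into a page count: nr_hugepages=1024 is consistent with the argument being in kB and the default 2048 kB hugepage (1024 pages x 2048 kB = 2097152 kB, the Hugetlb figure in the snapshots below), after which get_test_nr_hugepages_per_node hands the whole count to the single node (nodes_test[0]=1024). A minimal sketch of that arithmetic under those assumptions (the function body here is illustrative, not the suite's exact text):

    default_hugepages=2048   # kB; the x86-64 default hugepage size, assumed here
    get_test_nr_hugepages() {
        local size=$1                                 # requested pool size, assumed kB
        (( size >= default_hugepages )) || return 1   # mirrors the @55 guard in the trace
        echo $(( size / default_hugepages ))
    }
    get_test_nr_hugepages 2097152                     # prints 1024, matching nr_hugepages above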
00:04:41.491 16:20:18 -- setup/hugepages.sh@83 -- # : 0
00:04:41.491 16:20:18 -- setup/hugepages.sh@84 -- # : 0
00:04:41.491 16:20:18 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:41.491 16:20:18 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:41.491 16:20:18 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:41.491 16:20:18 -- setup/hugepages.sh@153 -- # setup output
00:04:41.491 16:20:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:41.491 16:20:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:41.750 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:42.317 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:42.318 16:20:19 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:42.318 16:20:19 -- setup/hugepages.sh@89 -- # local node
00:04:42.318 16:20:19 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:42.318 16:20:19 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:42.318 16:20:19 -- setup/hugepages.sh@92 -- # local surp
00:04:42.318 16:20:19 -- setup/hugepages.sh@93 -- # local resv
00:04:42.318 16:20:19 -- setup/hugepages.sh@94 -- # local anon
00:04:42.318 16:20:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:42.318 16:20:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:42.318 16:20:19 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:42.318 16:20:19 -- setup/common.sh@18 -- # local node=
00:04:42.318 16:20:19 -- setup/common.sh@19 -- # local var val
00:04:42.318 16:20:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.318 16:20:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.318 16:20:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.318 16:20:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.318 16:20:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.318 16:20:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.318 16:20:19 -- setup/common.sh@31 -- # IFS=': '
00:04:42.318 16:20:19 -- setup/common.sh@31 -- # read -r var val _
00:04:42.318 16:20:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5193620 kB' 'MemAvailable: 9506812 kB' 'Buffers: 37528 kB' 'Cached: 4401508 kB' 'SwapCached: 0 kB' 'Active: 1202472 kB' 'Inactive: 3365940 kB' 'Active(anon): 138400 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064072 kB' 'Inactive(file): 3364148 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 148132 kB' 'Mapped: 73464 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298512 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91676 kB' 'KernelStack: 4588 kB' 'PageTables: 3784 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 615596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:42.318 [... xtrace elided: every field from MemTotal through HardwareCorrupted fails the AnonHugePages match and hits continue ...]
00:04:42.319 16:20:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.319 16:20:19 -- setup/common.sh@33 -- # echo 0
00:04:42.319 16:20:19 -- setup/common.sh@33 -- # return 0
00:04:42.319 16:20:19 -- setup/hugepages.sh@97 -- # anon=0
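Every get_meminfo call traced here follows the same pattern: read /proc/meminfo (or a node's own meminfo file) into an array, strip the "Node N " prefix that per-node files carry, then split each line on ': ' and return the value whose key matches. A condensed, self-contained re-reading of that loop, as a sketch rather than the script's exact text:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # per-node queries read that node's meminfo instead of the global file
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Total   # prints 1024 on this box, per the snapshots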
00:04:42.319 16:20:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:42.319 16:20:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.319 16:20:19 -- setup/common.sh@18 -- # local node=
00:04:42.319 16:20:19 -- setup/common.sh@19 -- # local var val
00:04:42.319 16:20:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.319 16:20:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.319 16:20:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.319 16:20:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.319 16:20:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.319 16:20:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.319 16:20:19 -- setup/common.sh@31 -- # IFS=': '
00:04:42.319 16:20:19 -- setup/common.sh@31 -- # read -r var val _
00:04:42.319 16:20:19 -- setup/common.sh@16 -- # printf '%s\n' [... same full /proc/meminfo snapshot as above, except: 'AnonPages: 148004 kB' 'VmallocUsed: 14340 kB' ...]
00:04:42.320 [... xtrace elided: every field from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and hits continue ...]
00:04:42.321 16:20:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.321 16:20:19 -- setup/common.sh@33 -- # echo 0
00:04:42.321 16:20:19 -- setup/common.sh@33 -- # return 0
00:04:42.321 16:20:19 -- setup/hugepages.sh@99 -- # surp=0
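HugePages_Surp and HugePages_Rsvd, both 0 in these snapshots, count surplus pages allocated beyond the configured pool (overcommit) and pages reserved for mappings but not yet faulted in. The same counters can be cross-checked per page size under sysfs; the paths below are the standard kernel hugetlb files, not something this suite defines:

    h=/sys/kernel/mm/hugepages/hugepages-2048kB
    cat "$h/nr_hugepages" "$h/free_hugepages" "$h/resv_hugepages" "$h/surplus_hugepages"
    # expected on this box, per the snapshots: 1024 / 1024 / 0 / 0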
00:04:42.321 16:20:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:42.321 16:20:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.321 16:20:19 -- setup/common.sh@18 -- # local node=
00:04:42.321 16:20:19 -- setup/common.sh@19 -- # local var val
00:04:42.321 16:20:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.321 16:20:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.321 16:20:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.321 16:20:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.321 16:20:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.321 16:20:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.321 16:20:19 -- setup/common.sh@31 -- # IFS=': '
00:04:42.321 16:20:19 -- setup/common.sh@31 -- # read -r var val _
00:04:42.321 16:20:19 -- setup/common.sh@16 -- # printf '%s\n' [... same full /proc/meminfo snapshot as above, except: 'MemFree: 5193628 kB' 'MemAvailable: 9506820 kB' 'Active: 1202340 kB' 'Active(anon): 138268 kB' 'AnonPages: 147744 kB' 'KernelStack: 4556 kB' 'PageTables: 3724 kB' 'Committed_AS: 620436 kB' ...]
00:04:42.321 [... xtrace elided: every field from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits continue ...]
00:04:42.322 16:20:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:42.322 16:20:19 -- setup/common.sh@33 -- # echo 0
00:04:42.322 16:20:19 -- setup/common.sh@33 -- # return 0
00:04:42.322 16:20:19 -- setup/hugepages.sh@100 -- # resv=0
00:04:42.322 nr_hugepages=1024
00:04:42.322 16:20:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:42.322 resv_hugepages=0
00:04:42.322 surplus_hugepages=0
00:04:42.322 16:20:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:42.322 16:20:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:42.322 anon_hugepages=0
00:04:42.322 16:20:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:42.322 16:20:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:42.322 16:20:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
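The echoes and (( ... )) checks above are the whole point of verify_nr_hugepages: the configured pool must equal what the kernel reports once surplus and reserved pages are folded in. Restated as a standalone sketch, reusing the illustrative get_meminfo defined earlier:

    nr_hugepages=1024                       # requested by even_2G_alloc
    anon=$(get_meminfo AnonHugePages)       # 0 here: no transparent hugepages in play
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    total=$(get_meminfo HugePages_Total)    # 1024
    # the pool is consistent when every configured page is accounted for
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage pool mismatch' >&2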
00:04:42.322 16:20:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.322 16:20:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5193572 kB' 'MemAvailable: 9506764 kB' 'Buffers: 37528 kB' 'Cached: 4401508 kB' 'SwapCached: 0 kB' 'Active: 1202580 kB' 'Inactive: 3365940 kB' 'Active(anon): 138508 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064072 kB' 'Inactive(file): 3364148 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 148024 kB' 'Mapped: 73464 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298512 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91676 kB' 'KernelStack: 4636 kB' 'PageTables: 3884 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 619676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # continue 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # continue 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # continue 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # continue 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # continue 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # continue 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # continue 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:42.322 16:20:19 -- setup/common.sh@32 -- # continue 00:04:42.322 16:20:19 -- setup/common.sh@31 -- # IFS=': ' 
00:04:42.322 16:20:19 -- setup/common.sh@31 -- # read -r var val _
00:04:42.322 16:20:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:42.322 16:20:19 -- setup/common.sh@32 -- # continue
[the IFS=': ' / read -r var val _ / compare-and-continue cycle repeats for every remaining /proc/meminfo field, Inactive(anon) through CmaFree, until the requested key matches]
00:04:42.323 16:20:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:42.323 16:20:19 -- setup/common.sh@33 -- # echo 1024
00:04:42.323 16:20:19 -- setup/common.sh@33 -- # return 0
00:04:42.323 16:20:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:42.323 16:20:19 -- setup/hugepages.sh@112 -- # get_nodes
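The block above is get_meminfo walking /proc/meminfo one "Key: value kB" pair at a time until the requested key (here HugePages_Total) matches. A minimal standalone sketch of that lookup pattern, written from the trace rather than quoted from SPDK's setup/common.sh:

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo field, e.g. HugePages_Total.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do    # splits "Key: value kB"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1                                # key not found
    }

    get_meminfo HugePages_Total    # prints 1024 on the VM traced here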
00:04:42.323 16:20:19 -- setup/hugepages.sh@27 -- # local node
00:04:42.323 16:20:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:42.323 16:20:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:42.323 16:20:19 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:42.323 16:20:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:42.323 16:20:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:42.323 16:20:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:42.323 16:20:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:42.323 16:20:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.323 16:20:19 -- setup/common.sh@18 -- # local node=0
00:04:42.323 16:20:19 -- setup/common.sh@19 -- # local var val
00:04:42.323 16:20:19 -- setup/common.sh@20 -- # local mem_f mem
00:04:42.323 16:20:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.323 16:20:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:42.323 16:20:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:42.323 16:20:19 -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.323 16:20:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.323 16:20:19 -- setup/common.sh@31 -- # IFS=': '
00:04:42.323 16:20:19 -- setup/common.sh@31 -- # read -r var val _
00:04:42.582 16:20:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5193580 kB' 'MemUsed: 7057516 kB' 'Active: 1202256 kB' 'Inactive: 3365940 kB' 'Active(anon): 138184 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064072 kB' 'Inactive(file): 3364148 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'FilePages: 4439036 kB' 'Mapped: 73416 kB' 'AnonPages: 147856 kB' 'Shmem: 2616 kB' 'KernelStack: 4620 kB' 'PageTables: 3848 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206836 kB' 'Slab: 298512 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:42.582 16:20:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.582 16:20:19 -- setup/common.sh@32 -- # continue
[the same compare-and-continue cycle runs over each node0 meminfo field, MemFree through HugePages_Free, until the key matches]
00:04:42.583 16:20:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.583 16:20:19 -- setup/common.sh@33 -- # echo 0
00:04:42.583 16:20:19 -- setup/common.sh@33 -- # return 0
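The same lookup just ran against node0 instead of the system-wide file: with a node argument, mem_f switches to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the trace strips with an extglob pattern before parsing. A sketch of that variant; the helper name get_node_meminfo is mine, not the script's:

    #!/usr/bin/env bash
    shopt -s extglob    # required for the +([0-9]) pattern below
    # Print one field from a per-NUMA-node meminfo file.
    get_node_meminfo() {
        local get=$1 node=$2 var val _ mem
        mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")    # drop the leading "Node 0 "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_node_meminfo HugePages_Surp 0    # prints 0 in this run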
00:04:42.583 16:20:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:42.583 16:20:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:42.583 16:20:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:42.583 16:20:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:42.583 16:20:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:42.583 node0=1024 expecting 1024
00:04:42.583 16:20:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:42.583 real	0m0.903s
00:04:42.583 user	0m0.209s
00:04:42.583 sys	0m0.725s
00:04:42.583 16:20:19 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:42.583 16:20:19 -- common/autotest_common.sh@10 -- # set +x
00:04:42.583 ************************************
00:04:42.583 END TEST even_2G_alloc
00:04:42.583 ************************************
00:04:42.583 16:20:19 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:42.583 16:20:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:42.583 16:20:19 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:42.583 16:20:19 -- common/autotest_common.sh@10 -- # set +x
00:04:42.583 ************************************
00:04:42.583 START TEST odd_alloc
00:04:42.583 ************************************
00:04:42.583 16:20:19 -- common/autotest_common.sh@1104 -- # odd_alloc
00:04:42.583 16:20:19 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:42.583 16:20:19 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:42.583 16:20:19 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:42.583 16:20:19 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:42.583 16:20:19 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:42.583 16:20:19 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:42.583 16:20:19 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:42.583 16:20:19 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:42.583 16:20:19 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:42.583 16:20:19 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:42.583 16:20:19 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:42.583 16:20:19 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:42.583 16:20:19 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:42.583 16:20:19 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
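odd_alloc requests 2098176 kB (HUGEMEM=2049 MB), deliberately not a multiple of the 2048 kB hugepage size, and the trace lands on nr_hugepages=1025. The arithmetic below reproduces that figure under a round-up-division assumption; the rounding rule is inferred from the traced values, not quoted from the script:

    #!/usr/bin/env bash
    default_hugepages=2048              # kB per 2 MiB hugepage
    HUGEMEM=2049                        # MB, deliberately odd
    size=$(( HUGEMEM * 1024 ))          # 2098176 kB
    # Round up so the full request is covered: 2098176 kB -> 1025 pages.
    nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"   # 1025

    # With a single NUMA node the whole odd count lands on node 0,
    # matching the nodes_test[_no_nodes - 1]=1025 assignment that follows.
    _no_nodes=1
    declare -a nodes_test
    nodes_test[_no_nodes - 1]=$nr_hugepages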
00:04:42.583 16:20:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:42.583 16:20:19 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:42.583 16:20:19 -- setup/hugepages.sh@83 -- # : 0
00:04:42.583 16:20:19 -- setup/hugepages.sh@84 -- # : 0
00:04:42.583 16:20:19 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:42.583 16:20:19 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:42.583 16:20:19 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:42.583 16:20:19 -- setup/hugepages.sh@160 -- # setup output
00:04:42.583 16:20:19 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.583 16:20:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:42.841 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:42.841 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:43.409 16:20:20 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:43.409 16:20:20 -- setup/hugepages.sh@89 -- # local node
00:04:43.409 16:20:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:43.409 16:20:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:43.409 16:20:20 -- setup/hugepages.sh@92 -- # local surp
00:04:43.409 16:20:20 -- setup/hugepages.sh@93 -- # local resv
00:04:43.409 16:20:20 -- setup/hugepages.sh@94 -- # local anon
00:04:43.409 16:20:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:43.409 16:20:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:43.409 16:20:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:43.410 16:20:20 -- setup/common.sh@18 -- # local node=
00:04:43.410 16:20:20 -- setup/common.sh@19 -- # local var val
00:04:43.410 16:20:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.410 16:20:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.410 16:20:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.410 16:20:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.410 16:20:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.410 16:20:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.410 16:20:20 -- setup/common.sh@31 -- # IFS=': '
00:04:43.410 16:20:20 -- setup/common.sh@31 -- # read -r var val _
00:04:43.410 16:20:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5190052 kB' 'MemAvailable: 9503248 kB' 'Buffers: 37528 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1202368 kB' 'Inactive: 3365928 kB' 'Active(anon): 138280 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064088 kB' 'Inactive(file): 3364136 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 147516 kB' 'Mapped: 73152 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298704 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91868 kB' 'KernelStack: 4568 kB' 'PageTables: 3528 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 626112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
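verify_nr_hugepages first checks /sys/kernel/mm/transparent_hugepage/enabled; the traced value "always [madvise] never" brackets the active mode, so THP is not pinned to never and AnonHugePages has to be read and accounted for. A sketch of that guard, reusing the get_meminfo helper sketched earlier (an assumption, not the script verbatim):

    #!/usr/bin/env bash
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP can hand out anonymous hugepages on its own; count them.
        anon=$(get_meminfo AnonHugePages)                 # 0 kB in this run
    fi
    echo "anon_hugepages=$anon"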
00:04:43.410 16:20:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:43.410 16:20:20 -- setup/common.sh@32 -- # continue
[compare-and-continue repeats over every /proc/meminfo field, MemFree through HardwareCorrupted, until the key matches]
00:04:43.411 16:20:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:43.411 16:20:20 -- setup/common.sh@33 -- # echo 0
00:04:43.411 16:20:20 -- setup/common.sh@33 -- # return 0
00:04:43.411 16:20:20 -- setup/hugepages.sh@97 -- # anon=0
00:04:43.411 16:20:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
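The rest of the trace repeats the same field scan for HugePages_Surp, HugePages_Rsvd and HugePages_Total and then cross-checks the totals. Condensed into plain shell (comments show this run's values; get_meminfo is the helper sketched above):

    #!/usr/bin/env bash
    nr_hugepages=1025                       # what odd_alloc configured
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    total=$(get_meminfo HugePages_Total)    # 1025
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    # The check passes only if the kernel's view adds up exactly:
    (( total == nr_hugepages + surp + resv ))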
00:04:43.411 16:20:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.411 16:20:20 -- setup/common.sh@18 -- # local node=
00:04:43.411 16:20:20 -- setup/common.sh@19 -- # local var val
00:04:43.411 16:20:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.411 16:20:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.411 16:20:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.411 16:20:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.411 16:20:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.411 16:20:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.411 16:20:20 -- setup/common.sh@31 -- # IFS=': '
00:04:43.411 16:20:20 -- setup/common.sh@31 -- # read -r var val _
00:04:43.411 16:20:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5190580 kB' 'MemAvailable: 9503776 kB' 'Buffers: 37528 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1202484 kB' 'Inactive: 3365928 kB' 'Active(anon): 138396 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064088 kB' 'Inactive(file): 3364136 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 147880 kB' 'Mapped: 73412 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298704 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91868 kB' 'KernelStack: 4552 kB' 'PageTables: 3500 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 613368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:43.411 16:20:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.411 16:20:20 -- setup/common.sh@32 -- # continue
[compare-and-continue repeats over every field, MemFree through HugePages_Rsvd, until the key matches]
00:04:43.413 16:20:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.413 16:20:20 -- setup/common.sh@33 -- # echo 0
00:04:43.413 16:20:20 -- setup/common.sh@33 -- # return 0
00:04:43.413 16:20:20 -- setup/hugepages.sh@99 -- # surp=0
00:04:43.413 16:20:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:43.413 16:20:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:43.413 16:20:20 -- setup/common.sh@18 -- # local node=
00:04:43.413 16:20:20 -- setup/common.sh@19 -- # local var val
00:04:43.413 16:20:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.413 16:20:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.413 16:20:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.413 16:20:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.413 16:20:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.413 16:20:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.413 16:20:20 -- setup/common.sh@31 -- # IFS=': '
00:04:43.413 16:20:20 -- setup/common.sh@31 -- # read -r var val _
00:04:43.413 16:20:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5190840 kB' 'MemAvailable: 9504036 kB' 'Buffers: 37528 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1202744 kB' 'Inactive: 3365928 kB' 'Active(anon): 138656 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064088 kB' 'Inactive(file): 3364136 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 148140 kB' 'Mapped: 73412 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298704 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91868 kB' 'KernelStack: 4552 kB' 'PageTables: 3500 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 618160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:43.413 16:20:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:43.413 16:20:20 -- setup/common.sh@32 -- # continue
[compare-and-continue repeats over every field, MemFree through HugePages_Free, until the key matches]
00:04:43.414 16:20:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:43.414 16:20:20 -- setup/common.sh@33 -- # echo 0
00:04:43.414 16:20:20 -- setup/common.sh@33 -- # return 0
00:04:43.414 16:20:20 -- setup/hugepages.sh@100 -- # resv=0
00:04:43.414 16:20:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:43.414 nr_hugepages=1025
00:04:43.414 16:20:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:43.414 resv_hugepages=0
00:04:43.414 16:20:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:43.414 surplus_hugepages=0
00:04:43.414 16:20:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:43.414 anon_hugepages=0
00:04:43.414 16:20:20 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:43.414 16:20:20 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:43.414 16:20:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:43.414 16:20:20 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:43.414 16:20:20 -- setup/common.sh@18 -- # local node=
00:04:43.414 16:20:20 -- setup/common.sh@19 -- # local var val
00:04:43.414 16:20:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.414 16:20:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.414 16:20:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.414 16:20:20 -- setup/common.sh@25 -- # [[ 
-n '' ]] 00:04:43.414 16:20:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.414 16:20:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.414 16:20:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.414 16:20:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.414 16:20:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5191124 kB' 'MemAvailable: 9504320 kB' 'Buffers: 37528 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1202764 kB' 'Inactive: 3365928 kB' 'Active(anon): 138676 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064088 kB' 'Inactive(file): 3364136 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 147764 kB' 'Mapped: 73412 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298704 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91868 kB' 'KernelStack: 4588 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 623000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14324 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB' 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # continue 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # continue 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # continue 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # continue 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # continue 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # continue 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # continue 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:43.415 16:20:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:43.415 16:20:20 -- setup/common.sh@32 -- # [[ Inactive == 
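[editor's note] For readability, a minimal sketch of the lookup loop that produces the xtrace above, reconstructed from the setup/common.sh line tags (@17-@33) visible in the trace; this is an editor's reconstruction under those assumptions, not the verbatim SPDK source:

    # Sketch of get_meminfo as the trace shows it executing: scan a meminfo
    # file key by key until the requested key matches, then print its value.
    get_meminfo() {
        local get=$1 node=${2:-}    # key to look up, optional NUMA node
        local var val _
        local mem_f=/proc/meminfo
        # A per-node query reads that node's own meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every non-matching key
            echo "$val"                        # e.g. "0" for HugePages_Rsvd
            return 0
        done < "$mem_f"
    }

Each "continue" entry in the trace is one non-matching meminfo key passing through this loop, which is why the scan spans dozens of near-identical lines per lookup.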
00:04:43.415 16:20:20 -- setup/common.sh@31-32 -- # [xtrace condensed: /proc/meminfo keys MemTotal through CmaFree each compared against HugePages_Total and skipped via continue]
00:04:43.416 16:20:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:43.416 16:20:20 -- setup/common.sh@33 -- # echo 1025
00:04:43.416 16:20:20 -- setup/common.sh@33 -- # return 0
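[editor's note] The guard that follows (hugepages.sh@110) is plain shell arithmetic; in isolation, with the values visible in this trace:

    nr_hugepages=1025 surp=0 resv=0
    total=$(get_meminfo HugePages_Total)        # 1025 per the dump above
    (( total == nr_hugepages + surp + resv ))   # non-zero exit fails the test

odd_alloc passes only if the kernel reports exactly the odd page count (1025) that the test requested.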
00:04:43.416 16:20:20 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:43.416 16:20:20 -- setup/hugepages.sh@112 -- # get_nodes
00:04:43.416 16:20:20 -- setup/hugepages.sh@27 -- # local node
00:04:43.416 16:20:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.416 16:20:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:43.416 16:20:20 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:43.416 16:20:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:43.416 16:20:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.416 16:20:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.416 16:20:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:43.416 16:20:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:43.416 16:20:20 -- setup/common.sh@18 -- # local node=0
00:04:43.416 16:20:20 -- setup/common.sh@19 -- # local var val
00:04:43.416 16:20:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.416 16:20:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.416 16:20:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:43.416 16:20:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:43.416 16:20:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.416 16:20:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.416 16:20:20 -- setup/common.sh@31 -- # IFS=': '
00:04:43.416 16:20:20 -- setup/common.sh@31 -- # read -r var val _
00:04:43.416 16:20:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5191384 kB' 'MemUsed: 7059712 kB' 'Active: 1202504 kB' 'Inactive: 3365928 kB' 'Active(anon): 138416 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064088 kB' 'Inactive(file): 3364136 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'FilePages: 4439040 kB' 'Mapped: 73412 kB' 'AnonPages: 147636 kB' 'Shmem: 2616 kB' 'KernelStack: 4588 kB' 'PageTables: 3444 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206836 kB' 'Slab: 298704 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:04:43.416 16:20:20 -- setup/common.sh@31-32 -- # [xtrace condensed: node0 meminfo keys MemTotal through HugePages_Free each compared against HugePages_Surp and skipped via continue]
00:04:43.417 16:20:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.417 16:20:20 -- setup/common.sh@33 -- # echo 0
00:04:43.417 16:20:20 -- setup/common.sh@33 -- # return 0
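[editor's note] The per-node lookup just completed reads /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that /proc/meminfo lacks. The mem=("${mem[@]#Node +([0-9]) }") step in the trace strips that prefix with an extglob pattern so one parser serves both files; a standalone sketch of the same idea:

    shopt -s extglob                  # required for the +([0-9]) pattern below
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")  # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    printf '%s\n' "${mem[@]}" | grep '^HugePages_Surp'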
00:04:43.417 16:20:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.417 16:20:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:43.417 16:20:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:43.417 16:20:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:43.417 node0=1025 expecting 1025
00:04:43.417 16:20:20 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:43.417 16:20:20 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:43.417
00:04:43.417 real	0m0.909s
00:04:43.417 user	0m0.222s
00:04:43.417 sys	0m0.709s
00:04:43.417 16:20:20 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:43.417 16:20:20 -- common/autotest_common.sh@10 -- # set +x
00:04:43.417 ************************************
00:04:43.417 END TEST odd_alloc
00:04:43.417 ************************************
00:04:43.417 16:20:20 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:43.417 16:20:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:43.417 16:20:20 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:43.417 16:20:20 -- common/autotest_common.sh@10 -- # set +x
00:04:43.417 ************************************
00:04:43.417 START TEST custom_alloc
00:04:43.417 ************************************
00:04:43.418 16:20:20 -- common/autotest_common.sh@1104 -- # custom_alloc
00:04:43.418 16:20:20 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:43.418 16:20:20 -- setup/hugepages.sh@169 -- # local node
00:04:43.418 16:20:20 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:43.418 16:20:20 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:43.418 16:20:20 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:43.418 16:20:20 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:43.418 16:20:20 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:43.418 16:20:20 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:43.418 16:20:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:43.418 16:20:20 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:43.418 16:20:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:43.418 16:20:20 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:43.418 16:20:20 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:43.418 16:20:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:43.418 16:20:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:43.418 16:20:20 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:43.418 16:20:20 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:43.418 16:20:20 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:43.418 16:20:20 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:43.418 16:20:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.418 16:20:20 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:43.418 16:20:20 -- setup/hugepages.sh@83 -- # : 0
00:04:43.418 16:20:20 -- setup/hugepages.sh@84 -- # : 0
00:04:43.418 16:20:20 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:43.418 16:20:20 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:43.418 16:20:20 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:43.418 16:20:20 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:43.418 16:20:20 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:43.418 16:20:20 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:43.418 16:20:20 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:43.418 16:20:20 -- setup/hugepages.sh@62-78 -- # [xtrace condensed: second get_test_nr_hugepages_per_node pass runs against nodes_hp, sets nodes_test[0]=512, and returns 0]
00:04:43.418 16:20:20 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:43.418 16:20:20 -- setup/hugepages.sh@187 -- # setup output
00:04:43.418 16:20:20 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.418 16:20:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:43.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:43.676 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
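[editor's note] Before invoking setup.sh above, custom_alloc assembled the HUGENODE spec from its nodes_hp array; a sketch reconstructed from the hugepages.sh@181-@187 trace lines (the comma join comes from the local IFS=, declared at @167):

    nodes_hp=([0]=512)                 # one node, 512 pages, per this trace
    HUGENODE=() _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    (IFS=,; echo "HUGENODE=${HUGENODE[*]}")   # -> HUGENODE=nodes_hp[0]=512

With several NUMA nodes the array would join as e.g. "nodes_hp[0]=256,nodes_hp[1]=256"; here a single node carries the whole 512-page (1 GiB at 2048 kB per page) pool.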
00:04:43.936 16:20:20 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:43.936 16:20:20 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:43.936 16:20:20 -- setup/hugepages.sh@89 -- # local node
00:04:43.936 16:20:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:43.936 16:20:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:43.936 16:20:20 -- setup/hugepages.sh@92 -- # local surp
00:04:43.936 16:20:20 -- setup/hugepages.sh@93 -- # local resv
00:04:43.936 16:20:20 -- setup/hugepages.sh@94 -- # local anon
00:04:43.936 16:20:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:43.936 16:20:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:43.936 16:20:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:43.936 16:20:20 -- setup/common.sh@18 -- # local node=
00:04:43.936 16:20:20 -- setup/common.sh@19 -- # local var val
00:04:43.936 16:20:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:43.936 16:20:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:43.936 16:20:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:43.936 16:20:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:43.936 16:20:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:43.936 16:20:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:43.936 16:20:20 -- setup/common.sh@31 -- # IFS=': '
00:04:43.936 16:20:20 -- setup/common.sh@31 -- # read -r var val _
00:04:43.936 16:20:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 6241660 kB' 'MemAvailable: 10554856 kB' 'Buffers: 37528 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1202380 kB' 'Inactive: 3365920 kB' 'Active(anon): 138284 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064096 kB' 'Inactive(file): 3364128 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 147996 kB' 'Mapped: 73320 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298600 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91764 kB' 'KernelStack: 4552 kB' 'PageTables: 3488 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 614212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:43.936 16:20:20 -- setup/common.sh@31-32 -- # [xtrace condensed: /proc/meminfo keys MemTotal through HardwareCorrupted each compared against AnonHugePages and skipped via continue]
00:04:43.937 16:20:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:43.937 16:20:20 -- setup/common.sh@33 -- # echo 0
00:04:43.937 16:20:20 -- setup/common.sh@33 -- # return 0
00:04:44.199 16:20:20 -- setup/hugepages.sh@97 -- # anon=0
00:04:44.199 16:20:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:44.199 16:20:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.199 16:20:20 -- setup/common.sh@18 -- # local node=
00:04:44.199 16:20:20 -- setup/common.sh@19 -- # local var val
00:04:44.199 16:20:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.199 16:20:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.199 16:20:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.199 16:20:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.199 16:20:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.199 16:20:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.199 16:20:20 -- setup/common.sh@31 -- # IFS=': '
00:04:44.199 16:20:20 -- setup/common.sh@31 -- # read -r var val _
00:04:44.199 16:20:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 6241920 kB' 'MemAvailable: 10555116 kB' 'Buffers: 37528 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1202640 kB' 'Inactive: 3365920 kB' 'Active(anon): 138544 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064096 kB' 'Inactive(file): 3364128 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 148256 kB' 'Mapped: 73320 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298600 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91764 kB' 'KernelStack: 4552 kB' 'PageTables: 3488 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 619924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
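[editor's note] The AnonHugePages lookup that just returned 0 feeds verify_nr_hugepages' THP-aware accounting. The @96 test above ([[ always [madvise] never != *\[\n\e\v\e\r\]* ]]) reads /sys/kernel/mm/transparent_hugepage/enabled, where brackets mark the active mode; a sketch of that guard, assuming the get_meminfo helper sketched earlier:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # counted only while THP is enabled
    else
        anon=0                              # THP off: no anonymous hugepages to count
    fi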
00:04:44.199 16:20:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.199 16:20:20 -- setup/common.sh@32 -- # continue [the @31 read and @32 continue pair repeats for every non-matching key from Mlocked through HugePages_Rsvd]
00:04:44.200 16:20:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.200 16:20:20 -- setup/common.sh@33 -- # echo 0
00:04:44.200 16:20:20 -- setup/common.sh@33 -- # return 0
00:04:44.200 16:20:20 -- setup/hugepages.sh@99 -- # surp=0
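The setup/common.sh@17..@33 trace around the read above is get_meminfo() scanning a single key out of /proc/meminfo (or a per-node meminfo file when a node id is passed). A minimal sketch reconstructed from the traced statements; the function wrapper, the elif fallback, and the process substitution feeding the read loop are inferred rather than shown verbatim in the log:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    # Sketch: print the value of one meminfo key, optionally for one NUMA node.
    get_meminfo() {
      local get=$1        # key to look up, e.g. HugePages_Surp
      local node=${2:-}   # optional node id; empty means system-wide
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        # per-node files prefix every line with "Node N "
        mem_f=/sys/devices/system/node/node$node/meminfo
      elif [[ -n $node ]]; then
        return 1          # a node was requested but does not exist (inferred)
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix if present
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue # skip every other key (the long scans above)
        echo "$val"
        return 0
      done < <(printf '%s\n' "${mem[@]}")
    }

Call sites above use it as surp=$(get_meminfo HugePages_Surp) and, later in this run, get_meminfo HugePages_Surp 0 for node 0.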
00:04:44.200 16:20:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:44.200 16:20:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:44.200 16:20:20 -- setup/common.sh@18 -- # local node=
00:04:44.200 16:20:20 -- setup/common.sh@19 -- # local var val
00:04:44.200 16:20:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.200 16:20:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.200 16:20:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.200 16:20:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.200 16:20:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.200 16:20:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.200 16:20:20 -- setup/common.sh@31 -- # IFS=': '
00:04:44.200 16:20:20 -- setup/common.sh@31 -- # read -r var val _
00:04:44.200 16:20:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 6242180 kB' 'MemAvailable: 10555376 kB' 'Buffers: 37528 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1202640 kB' 'Inactive: 3365920 kB' 'Active(anon): 138544 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064096 kB' 'Inactive(file): 3364128 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 148128 kB' 'Mapped: 73320 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298600 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91764 kB' 'KernelStack: 4552 kB' 'PageTables: 3488 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 619924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:44.200 16:20:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:44.200 16:20:20 -- setup/common.sh@32 -- # continue [repeats for every non-matching key from MemFree through HugePages_Free]
00:04:44.201 16:20:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:44.201 16:20:20 -- setup/common.sh@33 -- # echo 0
00:04:44.201 16:20:20 -- setup/common.sh@33 -- # return 0
00:04:44.201 16:20:20 -- setup/hugepages.sh@100 -- # resv=0
00:04:44.201 16:20:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:44.201 nr_hugepages=512
00:04:44.201 16:20:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:44.201 resv_hugepages=0
00:04:44.201 16:20:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:44.201 surplus_hugepages=0
00:04:44.201 16:20:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:44.201 anon_hugepages=0
00:04:44.201 16:20:20 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:44.201 16:20:20 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
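hugepages.sh@102..@109 above print the derived counters and assert the pool accounting, which is re-checked at @110 below once HugePages_Total has been read back. A short sketch of the identity being verified, reusing the get_meminfo sketch from earlier; the failure action is an assumption, since the trace only shows the passing arithmetic:

    nr_hugepages=512                       # requested pool size, in pages
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 512 in this run
    # the kernel's view must equal requested + surplus + reserved: 512 == 512 + 0 + 0
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2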
00:04:44.201 16:20:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:44.201 16:20:20 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:44.201 16:20:20 -- setup/common.sh@18 -- # local node=
00:04:44.201 16:20:20 -- setup/common.sh@19 -- # local var val
00:04:44.201 16:20:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.201 16:20:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.201 16:20:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.201 16:20:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.201 16:20:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.201 16:20:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.201 16:20:20 -- setup/common.sh@31 -- # IFS=': '
00:04:44.201 16:20:20 -- setup/common.sh@31 -- # read -r var val _
00:04:44.201 16:20:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 6242132 kB' 'MemAvailable: 10555328 kB' 'Buffers: 37528 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1202604 kB' 'Inactive: 3365920 kB' 'Active(anon): 138508 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064096 kB' 'Inactive(file): 3364128 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 148336 kB' 'Mapped: 73320 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298600 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91764 kB' 'KernelStack: 4588 kB' 'PageTables: 3440 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 614224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14372 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:44.201 16:20:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:44.201 16:20:20 -- setup/common.sh@32 -- # continue [repeats for every non-matching key from MemFree through CmaFree]
00:04:44.202 16:20:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:44.202 16:20:20 -- setup/common.sh@33 -- # echo 512
00:04:44.202 16:20:20 -- setup/common.sh@33 -- # return 0
00:04:44.202 16:20:20 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:44.202 16:20:20 -- setup/hugepages.sh@112 -- # get_nodes
00:04:44.202 16:20:20 -- setup/hugepages.sh@27 -- # local node
00:04:44.202 16:20:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:44.202 16:20:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:44.202 16:20:20 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:44.202 16:20:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:44.202 16:20:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:44.202 16:20:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
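get_nodes and the hugepages.sh@115..@116 loop above spread the expected count across NUMA nodes, and the per-node read that follows re-parses each node's own meminfo (note common.sh@24 switching mem_f to /sys/devices/system/node/node0/meminfo). A sketch of that per-node pass; the array names follow the trace, but filling nodes_sys through get_meminfo stands in for whatever get_nodes actually does to cache the 512 seen at @30:

    shopt -s extglob
    resv=0                 # from the HugePages_Rsvd read above
    nodes_test=([0]=512)   # seeded earlier by get_test_nr_hugepages_per_node

    # what the kernel reports per node
    for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done

    # what the test expects per node, adjusted by reserved and per-node surplus pages
    for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"   # node0=512 expecting 512
      [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || echo "node $node mismatch" >&2
    done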
00:04:44.202 16:20:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:44.202 16:20:20 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.202 16:20:20 -- setup/common.sh@18 -- # local node=0
00:04:44.202 16:20:20 -- setup/common.sh@19 -- # local var val
00:04:44.202 16:20:20 -- setup/common.sh@20 -- # local mem_f mem
00:04:44.202 16:20:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.202 16:20:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:44.202 16:20:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:44.202 16:20:20 -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.202 16:20:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.202 16:20:20 -- setup/common.sh@31 -- # IFS=': '
00:04:44.202 16:20:20 -- setup/common.sh@31 -- # read -r var val _
00:04:44.202 16:20:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 6242392 kB' 'MemUsed: 6008704 kB' 'Active: 1202464 kB' 'Inactive: 3365920 kB' 'Active(anon): 138368 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064096 kB' 'Inactive(file): 3364128 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'FilePages: 4439040 kB' 'Mapped: 73320 kB' 'AnonPages: 148196 kB' 'Shmem: 2616 kB' 'KernelStack: 4640 kB' 'PageTables: 3412 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206836 kB' 'Slab: 298600 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:44.202 16:20:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.202 16:20:20 -- setup/common.sh@32 -- # continue [repeats for every non-matching node0 key from MemFree through HugePages_Free]
00:04:44.203 16:20:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.203 16:20:20 -- setup/common.sh@33 -- # echo 0
00:04:44.203 16:20:20 -- setup/common.sh@33 -- # return 0
00:04:44.203 16:20:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:44.203 16:20:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:44.203 16:20:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:44.203 16:20:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:44.203 16:20:20 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:44.203 node0=512 expecting 512
00:04:44.203 16:20:20 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:44.203 
00:04:44.203 real 0m0.656s
00:04:44.203 user 0m0.236s
00:04:44.203 sys 0m0.442s
00:04:44.204 16:20:20 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:44.204 16:20:20 -- common/autotest_common.sh@10 -- # set +x
00:04:44.204 ************************************
00:04:44.204 END TEST custom_alloc
00:04:44.204 ************************************
00:04:44.204 16:20:20 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:44.204 16:20:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:44.204 16:20:20 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:44.204 16:20:20 -- common/autotest_common.sh@10 -- # set +x
00:04:44.204 ************************************
00:04:44.204 START TEST no_shrink_alloc
00:04:44.204 ************************************
00:04:44.204 16:20:20 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:04:44.204 16:20:20 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:44.204 16:20:20 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:44.204 16:20:20 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:44.204 16:20:20 -- setup/hugepages.sh@51 -- # shift
00:04:44.204 16:20:20 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:44.204 16:20:20 -- setup/hugepages.sh@52 -- # local node_ids
00:04:44.204 16:20:20 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:44.204 16:20:20 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:44.204 16:20:20 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:44.204 16:20:20 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:44.204 16:20:20 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:44.204 16:20:20 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:44.204 16:20:20 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:44.204 16:20:20 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:44.204 16:20:20 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:44.204 16:20:20 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:44.204 16:20:20 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:44.204 16:20:20 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:44.204 16:20:20 -- setup/hugepages.sh@73 -- # return 0
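The get_test_nr_hugepages 2097152 0 prologue above converts a pool size into a page count: with the 2048 kB Hugepagesize reported in the meminfo dumps, 2097152 / 2048 = 1024, which is the nr_hugepages=1024 seen at hugepages.sh@57. A sketch of that conversion and of the per-node seeding at @70..@71; sourcing default_hugepages from get_meminfo and the simplified loop body are assumptions, only the arithmetic is traced:

    default_hugepages=$(get_meminfo Hugepagesize)    # 2048 (kB) on this VM

    get_test_nr_hugepages() {
      local size=$1; shift             # requested pool size in kB, e.g. 2097152
      local node_ids=("$@")            # optional NUMA node ids, e.g. 0
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
      local node
      for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages # each listed node expects the full count
      done
    }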
setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.040 16:20:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.040 16:20:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.040 16:20:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.040 16:20:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.040 16:20:21 -- setup/common.sh@31 -- # IFS=': '
00:04:45.040 16:20:21 -- setup/common.sh@31 -- # read -r var val _
00:04:45.040 16:20:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5207288 kB' 'MemAvailable: 9520492 kB' 'Buffers: 37536 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1188924 kB' 'Inactive: 3365916 kB' 'Active(anon): 124816 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064108 kB' 'Inactive(file): 3364124 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 704 kB' 'Writeback: 0 kB' 'AnonPages: 134540 kB' 'Mapped: 72392 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298024 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91188 kB' 'KernelStack: 4272 kB' 'PageTables: 2976 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 580128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14068 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[trace elided: setup/common.sh@32 compare/continue repeats once per meminfo field until AnonHugePages matches]
00:04:45.041 16:20:21 -- setup/common.sh@33 -- # echo 0
00:04:45.041 16:20:21 -- setup/common.sh@33 -- # return 0
00:04:45.041 16:20:21 -- setup/hugepages.sh@97 -- # anon=0
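(Editor's note: the compare/continue iterations are elided above and below for readability. The get_meminfo helper being traced is, in essence, the following bash sketch, reconstructed from the trace itself rather than copied from setup/common.sh, so exact line numbers and edge-case handling in the real script may differ.)

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below

    # Reconstructed sketch: look up one key from /proc/meminfo, or from a
    # per-node meminfo file when a node id is passed as the second argument.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # Prefer the per-node view when a node id was given and it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every key with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Split each "Key: value kB" line and print the value of the match.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Under this sketch, get_meminfo HugePages_Total would print 1024 on this host, and get_meminfo HugePages_Surp 0 would read node0's file instead of /proc/meminfo, which is exactly the pattern the trace below follows.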
00:04:45.041 16:20:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:45.041 16:20:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.041 16:20:21 -- setup/common.sh@18 -- # local node=
00:04:45.041 16:20:21 -- setup/common.sh@19 -- # local var val
00:04:45.041 16:20:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.041 16:20:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.041 16:20:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.041 16:20:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.041 16:20:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.041 16:20:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.041 16:20:21 -- setup/common.sh@31 -- # IFS=': '
00:04:45.041 16:20:21 -- setup/common.sh@31 -- # read -r var val _
00:04:45.041 16:20:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5207028 kB' 'MemAvailable: 9520232 kB' 'Buffers: 37536 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1189184 kB' 'Inactive: 3365916 kB' 'Active(anon): 125076 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064108 kB' 'Inactive(file): 3364124 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 704 kB' 'Writeback: 0 kB' 'AnonPages: 134412 kB' 'Mapped: 72392 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298024 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91188 kB' 'KernelStack: 4272 kB' 'PageTables: 2976 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 585168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14084 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[trace elided: setup/common.sh@32 compare/continue repeats once per meminfo field until HugePages_Surp matches]
00:04:45.043 16:20:21 -- setup/common.sh@33 -- # echo 0
00:04:45.043 16:20:21 -- setup/common.sh@33 -- # return 0
00:04:45.043 16:20:21 -- setup/hugepages.sh@99 -- # surp=0
00:04:45.043 16:20:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:45.043 16:20:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:45.043 16:20:21 -- setup/common.sh@18 -- # local node=
00:04:45.043 16:20:21 -- setup/common.sh@19 -- # local var val
00:04:45.043 16:20:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.043 16:20:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.043 16:20:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.043 16:20:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.043 16:20:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.043 16:20:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.043 16:20:21 -- setup/common.sh@31 -- # IFS=': '
00:04:45.043 16:20:21 -- setup/common.sh@31 -- # read -r var val _
00:04:45.043 16:20:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5207264 kB' 'MemAvailable: 9520468 kB' 'Buffers: 37536 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1189316 kB' 'Inactive: 3365916 kB' 'Active(anon): 125208 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064108 kB' 'Inactive(file): 3364124 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 704 kB' 'Writeback: 0 kB' 'AnonPages: 134780 kB' 'Mapped: 72364 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298020 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91184 kB' 'KernelStack: 4272 kB' 'PageTables: 2964 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 585168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[trace elided: setup/common.sh@32 compare/continue repeats once per meminfo field until HugePages_Rsvd matches]
00:04:45.044 16:20:21 -- setup/common.sh@33 -- # echo 0
00:04:45.044 16:20:21 -- setup/common.sh@33 -- # return 0
00:04:45.044 16:20:21 -- setup/hugepages.sh@100 -- # resv=0
00:04:45.044 16:20:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:45.044 nr_hugepages=1024
00:04:45.044 16:20:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:45.044 resv_hugepages=0
00:04:45.044 16:20:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:45.044 surplus_hugepages=0
00:04:45.044 16:20:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:45.044 anon_hugepages=0
00:04:45.044 16:20:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:45.044 16:20:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
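(Editor's note: the two arithmetic guards above are the heart of the verification: the configured page count must equal the sum of base, surplus, and reserved pages, and here it must also equal the base count alone. A hypothetical, self-contained restatement of those @107/@109 checks, reading /proc/meminfo directly instead of via get_meminfo; variable names are illustrative only.)

    # Hypothetical restatement of the consistency checks in the trace.
    expected=1024
    nr=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)    # 1024 in this run
    surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)   # 0
    resv=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)   # 0
    (( expected == nr + surp + resv )) || echo 'hugepage accounting mismatch'
    (( expected == nr )) || echo 'unexpected surplus/reserved pages'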
00:04:45.044 16:20:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:45.044 16:20:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:45.044 16:20:21 -- setup/common.sh@18 -- # local node=
00:04:45.044 16:20:21 -- setup/common.sh@19 -- # local var val
00:04:45.044 16:20:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.044 16:20:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.044 16:20:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.044 16:20:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.044 16:20:21 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.044 16:20:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.044 16:20:21 -- setup/common.sh@31 -- # IFS=': '
00:04:45.044 16:20:21 -- setup/common.sh@31 -- # read -r var val _
00:04:45.044 16:20:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5207272 kB' 'MemAvailable: 9520476 kB' 'Buffers: 37536 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1189412 kB' 'Inactive: 3365916 kB' 'Active(anon): 125304 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064108 kB' 'Inactive(file): 3364124 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 704 kB' 'Writeback: 0 kB' 'AnonPages: 134472 kB' 'Mapped: 72364 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298020 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91184 kB' 'KernelStack: 4324 kB' 'PageTables: 2936 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 590008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
[trace elided: setup/common.sh@32 compare/continue repeats once per meminfo field until HugePages_Total matches]
00:04:45.045 16:20:21 -- setup/common.sh@33 -- # echo 1024
00:04:45.045 16:20:21 -- setup/common.sh@33 -- # return 0
00:04:45.045 16:20:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:45.045 16:20:21 -- setup/hugepages.sh@112 -- # get_nodes
00:04:45.045 16:20:21 -- setup/hugepages.sh@27 -- # local node
00:04:45.045 16:20:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:45.045 16:20:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:45.045 16:20:21 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:45.045 16:20:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:45.045 16:20:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:45.045 16:20:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:45.045 16:20:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:45.045 16:20:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.045 16:20:21 -- setup/common.sh@18 -- # local node=0
00:04:45.045 16:20:21 -- setup/common.sh@19 -- # local var val
00:04:45.045 16:20:21 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.045 16:20:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.045 16:20:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:45.045 16:20:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
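(Editor's note: this time a node id was passed, so the per-node file exists and the helper switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo. A small, hypothetical illustration of that per-node walk; keys in these files carry a "Node N" prefix, which is why the sketch earlier strips it.)

    shopt -s extglob
    # Hypothetical: enumerate NUMA nodes and show each node's hugepage total.
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        idx=${node_dir##*node}
        # prints e.g. "Node 0 HugePages_Total:  1024"
        grep HugePages_Total "$node_dir/meminfo"
    done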
kB' 'Inactive(file): 3364124 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 704 kB' 'Writeback: 0 kB' 'FilePages: 4439048 kB' 'Mapped: 72364 kB' 'AnonPages: 134992 kB' 'Shmem: 2616 kB' 'KernelStack: 4392 kB' 'PageTables: 2936 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206836 kB' 'Slab: 298020 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:45.045 16:20:21 -- setup/common.sh@31 -- # [xtrace condensed: the read loop scans every key of the node0 snapshot above, from MemTotal through HugePages_Free, and hits the @32 continue on each one that is not HugePages_Surp]
00:04:45.046 16:20:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.046 16:20:21 -- setup/common.sh@33 -- # echo 0
00:04:45.046 16:20:21 -- setup/common.sh@33 -- # return 0
00:04:45.046 16:20:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:45.046 16:20:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:45.046 16:20:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:45.046 16:20:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:45.046 16:20:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:45.046 node0=1024 expecting 1024
00:04:45.046 16:20:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:45.046 16:20:21 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
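What the stanza above verifies: hugepages.sh walks every node it tested, compares the hugepage count it gathered against the expected total, prints nodeN=X expecting Y, and then does a literal string compare at @130. A minimal Bash sketch of that comparison; nodes_test and expected are illustrative names assumed for this sketch, not necessarily the repo's exact layout:

declare -a nodes_test=([0]=1024)   # per-node counts gathered earlier (assumed)
expected=1024
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]} expecting ${expected}"
    # mirrors hugepages.sh@130: a string compare, not arithmetic
    [[ ${nodes_test[node]} == "$expected" ]] || exit 1
done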
00:04:45.046 16:20:21 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:45.046 16:20:21 -- setup/hugepages.sh@202 -- # setup output
00:04:45.046 16:20:21 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:45.046 16:20:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:45.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:45.306 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:45.306 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:45.306 16:20:22 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:45.306 16:20:22 -- setup/hugepages.sh@89 -- # local node
00:04:45.306 16:20:22 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:45.306 16:20:22 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:45.306 16:20:22 -- setup/hugepages.sh@92 -- # local surp
00:04:45.306 16:20:22 -- setup/hugepages.sh@93 -- # local resv
00:04:45.306 16:20:22 -- setup/hugepages.sh@94 -- # local anon
00:04:45.306 16:20:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:45.306 16:20:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:45.306 16:20:22 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:45.306 16:20:22 -- setup/common.sh@18 -- # local node=
00:04:45.306 16:20:22 -- setup/common.sh@19 -- # local var val
00:04:45.306 16:20:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.306 16:20:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.306 16:20:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.306 16:20:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.306 16:20:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.306 16:20:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.306 16:20:22 -- setup/common.sh@31 -- # IFS=': '
00:04:45.306 16:20:22 -- setup/common.sh@31 -- # read -r var val _
00:04:45.306 16:20:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5206916 kB' 'MemAvailable: 9520120 kB' 'Buffers: 37536 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1189540 kB' 'Inactive: 3365916 kB' 'Active(anon): 125432 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064108 kB' 'Inactive(file): 3364124 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 704 kB' 'Writeback: 0 kB' 'AnonPages: 135104 kB' 'Mapped: 72652 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298408 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91572 kB' 'KernelStack: 4344 kB' 'PageTables: 2752 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 587140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14084 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:45.306 16:20:22 -- setup/common.sh@31 -- # [xtrace condensed: each key of the snapshot above is read and skipped with the @32 continue until AnonHugePages matches]
00:04:45.307 16:20:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:45.307 16:20:22 -- setup/common.sh@33 -- # echo 0
00:04:45.307 16:20:22 -- setup/common.sh@33 -- # return 0
00:04:45.307 16:20:22 -- setup/hugepages.sh@97 -- # anon=0
00:04:45.307 16:20:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:45.307 16:20:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.307 16:20:22 -- setup/common.sh@18 -- # local node=
00:04:45.307 16:20:22 -- setup/common.sh@19 -- # local var val
00:04:45.307 16:20:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.307 16:20:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
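The get_meminfo trace above (common.sh@17 through @33) reads the whole meminfo file into an array, strips any per-node "Node N " prefix, then splits each "Key: value" pair on IFS=': ' until the requested key matches; every other key hits the @32 continue. A runnable sketch of that lookup; get_meminfo_sketch is an illustrative name, not the helper's real signature:

shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo
    # per-node queries read the node's own meminfo, as in the node0 pass later on
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")            # strip the "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"  # split "Key: value kB"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }  # any other key: continue
    done
    return 1
}
get_meminfo_sketch HugePages_Total     # -> 1024 on this box, per the trace
get_meminfo_sketch HugePages_Surp 0    # same key, but from node0's meminfo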
00:04:45.307 16:20:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.307 16:20:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.307 16:20:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.307 16:20:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.307 16:20:22 -- setup/common.sh@31 -- # IFS=': '
00:04:45.307 16:20:22 -- setup/common.sh@31 -- # read -r var val _
00:04:45.307 16:20:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5207176 kB' 'MemAvailable: 9520380 kB' 'Buffers: 37536 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1189800 kB' 'Inactive: 3365916 kB' 'Active(anon): 125692 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064108 kB' 'Inactive(file): 3364124 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 704 kB' 'Writeback: 0 kB' 'AnonPages: 134716 kB' 'Mapped: 72652 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298408 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91572 kB' 'KernelStack: 4344 kB' 'PageTables: 2752 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 587140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:45.307 16:20:22 -- setup/common.sh@31 -- # [xtrace condensed: every key of the snapshot above is read and skipped with the @32 continue until HugePages_Surp matches]
00:04:45.308 16:20:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.308 16:20:22 -- setup/common.sh@33 -- # echo 0
00:04:45.308 16:20:22 -- setup/common.sh@33 -- # return 0
00:04:45.308 16:20:22 -- setup/hugepages.sh@99 -- # surp=0
00:04:45.308 16:20:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:45.308 16:20:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:45.308 16:20:22 -- setup/common.sh@18 -- # local node=
00:04:45.308 16:20:22 -- setup/common.sh@19 -- # local var val
00:04:45.308 16:20:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.308 16:20:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.308 16:20:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.308 16:20:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.308 16:20:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.308 16:20:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.309 16:20:22 -- setup/common.sh@31 -- # IFS=': '
00:04:45.309 16:20:22 -- setup/common.sh@31 -- # read -r var val _
00:04:45.309 16:20:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5207484 kB' 'MemAvailable: 9520688 kB' 'Buffers: 37536 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1189584 kB' 'Inactive: 3365916 kB' 'Active(anon): 125476 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064108 kB' 'Inactive(file): 3364124 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 704 kB' 'Writeback: 0 kB' 'AnonPages: 134688 kB' 'Mapped: 72604 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298408 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91572 kB' 'KernelStack: 4272 kB' 'PageTables: 2772 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 591932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:45.309 16:20:22 -- setup/common.sh@31 -- # [xtrace condensed: every key of the snapshot above is read and skipped with the @32 continue until HugePages_Rsvd matches]
00:04:45.570 16:20:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:45.570 16:20:22 -- setup/common.sh@33 -- # echo 0
00:04:45.570 16:20:22 -- setup/common.sh@33 -- # return 0
00:04:45.570 16:20:22 -- setup/hugepages.sh@100 -- # resv=0
00:04:45.570 16:20:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:45.570 nr_hugepages=1024
00:04:45.570 16:20:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:45.570 resv_hugepages=0
00:04:45.570 16:20:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:45.570 surplus_hugepages=0
00:04:45.570 16:20:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:45.570 anon_hugepages=0
00:04:45.570 16:20:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:45.570 16:20:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
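The @107 arithmetic above is the invariant this whole pass establishes: the kernel's HugePages_Total must equal the requested nr_hugepages plus the surplus and reserved pages just read back (surp=0 and resv=0 in this run, so 1024 == 1024 + 0 + 0). The same check as a standalone sketch against the live /proc/meminfo:

nr_hugepages=1024
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2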
00:04:45.570 16:20:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:45.570 16:20:22 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:45.570 16:20:22 -- setup/common.sh@18 -- # local node=
00:04:45.570 16:20:22 -- setup/common.sh@19 -- # local var val
00:04:45.570 16:20:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:45.570 16:20:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.570 16:20:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:45.570 16:20:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:45.570 16:20:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.570 16:20:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.570 16:20:22 -- setup/common.sh@31 -- # IFS=': '
00:04:45.570 16:20:22 -- setup/common.sh@31 -- # read -r var val _
00:04:45.570 16:20:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5207816 kB' 'MemAvailable: 9521020 kB' 'Buffers: 37536 kB' 'Cached: 4401512 kB' 'SwapCached: 0 kB' 'Active: 1189232 kB' 'Inactive: 3365916 kB' 'Active(anon): 125124 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064108 kB' 'Inactive(file): 3364124 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 704 kB' 'Writeback: 0 kB' 'AnonPages: 134792 kB' 'Mapped: 72572 kB' 'Shmem: 2616 kB' 'KReclaimable: 206836 kB' 'Slab: 298424 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91588 kB' 'KernelStack: 4340 kB' 'PageTables: 2676 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 592156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14100 kB' 'VmallocChunk: 0 kB' 'Percpu: 8640 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 4036608 kB' 'DirectMap1G: 10485760 kB'
00:04:45.570 16:20:22 -- setup/common.sh@31 -- # [xtrace condensed: every key of the snapshot above is read and skipped with the @32 continue until HugePages_Total matches]
00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:45.571 16:20:22 -- setup/common.sh@33 -- # echo 1024
00:04:45.571 16:20:22 -- setup/common.sh@33 -- # return 0
00:04:45.571 16:20:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:45.571 16:20:22 -- setup/hugepages.sh@112 -- # get_nodes
00:04:45.571 16:20:22 -- setup/hugepages.sh@27 -- # local node
00:04:45.571 16:20:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:45.571 16:20:22 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:45.571 16:20:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:45.571 16:20:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:45.571 16:20:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:45.571 16:20:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:45.571 16:20:22 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:45.571 16:20:22 -- setup/common.sh@18 -- # local node=0 00:04:45.571 16:20:22 -- setup/common.sh@19 -- # local var val 00:04:45.571 16:20:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:45.571 16:20:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:45.571 16:20:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:45.571 16:20:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:45.571 16:20:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:45.571 16:20:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.571 16:20:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5207580 kB' 'MemUsed: 7043516 kB' 'Active: 1189232 kB' 'Inactive: 3365916 kB' 'Active(anon): 125124 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1064108 kB' 'Inactive(file): 3364124 kB' 'Unevictable: 18536 kB' 'Mlocked: 18536 kB' 'Dirty: 704 kB' 'Writeback: 0 kB' 'FilePages: 4439048 kB' 'Mapped: 72572 kB' 'AnonPages: 134796 kB' 'Shmem: 2616 kB' 'KernelStack: 4408 kB' 'PageTables: 2676 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 206836 kB' 'Slab: 298424 kB' 'SReclaimable: 206836 kB' 'SUnreclaim: 91588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:45.571 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.571 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.571 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 
00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 
00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # continue 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:45.572 16:20:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:45.572 16:20:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.572 16:20:22 -- setup/common.sh@33 -- # echo 0 00:04:45.572 16:20:22 -- setup/common.sh@33 -- # return 0 00:04:45.572 16:20:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:45.572 16:20:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:45.572 16:20:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:45.572 16:20:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:45.572 16:20:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:45.572 node0=1024 expecting 1024 00:04:45.572 16:20:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:45.572 00:04:45.572 real 0m1.301s 00:04:45.572 user 0m0.526s 00:04:45.572 sys 0m0.834s 00:04:45.572 16:20:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.572 ************************************ 00:04:45.572 16:20:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.572 END TEST no_shrink_alloc 00:04:45.572 ************************************ 00:04:45.572 16:20:22 -- setup/hugepages.sh@217 -- # clear_hp 00:04:45.572 16:20:22 -- setup/hugepages.sh@37 -- # local node hp 00:04:45.572 16:20:22 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:45.572 16:20:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:45.572 16:20:22 -- setup/hugepages.sh@41 -- # echo 0 00:04:45.572 16:20:22 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:45.572 16:20:22 -- setup/hugepages.sh@41 -- # echo 0 00:04:45.572 16:20:22 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:45.572 16:20:22 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:45.572 00:04:45.572 real 0m5.905s 00:04:45.572 user 0m1.996s 00:04:45.572 sys 0m4.002s 00:04:45.572 16:20:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.572 ************************************ 00:04:45.572 END TEST hugepages 00:04:45.572 ************************************ 00:04:45.572 16:20:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.572 16:20:22 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:45.572 16:20:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:45.572 16:20:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.572 16:20:22 -- common/autotest_common.sh@10 -- # set +x 00:04:45.572 ************************************ 00:04:45.572 START TEST driver 00:04:45.572 ************************************ 00:04:45.572 16:20:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:45.572 * Looking for test storage... 
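(The get_meminfo helper traced above is the core trick of these hugepage checks: split each meminfo line on ': ' into key and value, fall back from /proc/meminfo to the per-node copy under /sys/devices/system/node when a node is requested, and strip the "Node N " prefix that the per-node files carry. A condensed, self-contained sketch of the same idea follows; it is an illustration of the technique, not the SPDK helper itself.)

    #!/usr/bin/env bash
    # Sketch of the meminfo scan traced above: print the value for one key,
    # optionally from a per-node meminfo file.
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it so the
        # key/value split below works for both layouts.
        mem=("${mem[@]#Node ${node:-0} }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Total    # 1024 on the VM traced above
    get_meminfo HugePages_Surp 0   # per-node surplus count, 0 here

(On the test VM this yields the same 1024/0 pair the trace feeds into the "node0=1024 expecting 1024" check.)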
00:04:45.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:45.572 16:20:22 -- setup/driver.sh@68 -- # setup reset 00:04:45.572 16:20:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:45.572 16:20:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:46.139 16:20:22 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:46.139 16:20:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.139 16:20:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.139 16:20:22 -- common/autotest_common.sh@10 -- # set +x 00:04:46.139 ************************************ 00:04:46.139 START TEST guess_driver 00:04:46.139 ************************************ 00:04:46.139 16:20:22 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:46.139 16:20:22 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:46.139 16:20:22 -- setup/driver.sh@47 -- # local fail=0 00:04:46.139 16:20:22 -- setup/driver.sh@49 -- # pick_driver 00:04:46.139 16:20:22 -- setup/driver.sh@36 -- # vfio 00:04:46.139 16:20:22 -- setup/driver.sh@21 -- # local iommu_grups 00:04:46.139 16:20:22 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:46.139 16:20:22 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:46.139 16:20:22 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:46.139 16:20:22 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:46.139 16:20:22 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:46.139 16:20:22 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:04:46.139 16:20:22 -- setup/driver.sh@32 -- # return 1 00:04:46.139 16:20:22 -- setup/driver.sh@38 -- # uio 00:04:46.139 16:20:22 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:46.139 16:20:22 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:46.139 16:20:22 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:46.139 16:20:22 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:46.139 16:20:22 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio.ko 00:04:46.139 insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:04:46.139 16:20:22 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:46.139 16:20:22 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:46.139 16:20:22 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:46.140 Looking for driver=uio_pci_generic 00:04:46.140 16:20:22 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:46.140 16:20:22 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.140 16:20:22 -- setup/driver.sh@45 -- # setup output config 00:04:46.140 16:20:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.140 16:20:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:46.398 16:20:23 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:46.398 16:20:23 -- setup/driver.sh@58 -- # continue 00:04:46.398 16:20:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:46.398 16:20:23 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:46.398 16:20:23 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:46.398 16:20:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.774 16:20:24 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:47.774 16:20:24 -- setup/driver.sh@65 -- # setup reset 
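(guess_driver above lands on uio_pci_generic because this VM exposes no populated IOMMU groups and vfio's unsafe no-IOMMU knob reads N, so the vfio branch returns 1 and the fallback is accepted once modprobe can resolve the module. A rough standalone rendering of that probe order follows; it sketches the logic rather than reproducing driver.sh, and the "vfio-pci" echo is an assumed label for the branch the trace never reaches.)

    #!/usr/bin/env bash
    # Probe order sketched from the trace: vfio only with a usable IOMMU,
    # otherwise uio_pci_generic if modprobe can resolve its dependencies.
    pick_driver() {
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if [[ -e ${groups[0]} ]] || [[ $unsafe == Y ]]; then
            echo vfio-pci    # assumed branch label; not taken in the trace above
        elif modprobe --show-depends uio_pci_generic &> /dev/null; then
            echo uio_pci_generic
        else
            echo 'No valid driver found' >&2
            return 1
        fi
    }

    driver=$(pick_driver) && echo "Looking for driver=$driver"

(On hardware with populated /sys/kernel/iommu_groups the first branch would win; in this VM only the modprobe fallback succeeds, matching the "Looking for driver=uio_pci_generic" line above.)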
00:04:47.774 16:20:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.774 16:20:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:48.033 00:04:48.033 real 0m1.911s 00:04:48.033 user 0m0.460s 00:04:48.033 sys 0m1.427s 00:04:48.033 ************************************ 00:04:48.033 END TEST guess_driver 00:04:48.033 ************************************ 00:04:48.033 16:20:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.033 16:20:24 -- common/autotest_common.sh@10 -- # set +x 00:04:48.033 00:04:48.033 real 0m2.429s 00:04:48.033 user 0m0.766s 00:04:48.033 sys 0m1.649s 00:04:48.033 16:20:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.033 16:20:24 -- common/autotest_common.sh@10 -- # set +x 00:04:48.033 ************************************ 00:04:48.033 END TEST driver 00:04:48.033 ************************************ 00:04:48.033 16:20:24 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:48.033 16:20:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.033 16:20:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.033 16:20:24 -- common/autotest_common.sh@10 -- # set +x 00:04:48.033 ************************************ 00:04:48.033 START TEST devices 00:04:48.033 ************************************ 00:04:48.033 16:20:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:48.033 * Looking for test storage... 00:04:48.033 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:48.033 16:20:24 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:48.033 16:20:24 -- setup/devices.sh@192 -- # setup reset 00:04:48.033 16:20:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:48.033 16:20:24 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:48.601 16:20:25 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:48.601 16:20:25 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:48.601 16:20:25 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:48.601 16:20:25 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:48.601 16:20:25 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:48.601 16:20:25 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:48.601 16:20:25 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:48.601 16:20:25 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:48.601 16:20:25 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:48.601 16:20:25 -- setup/devices.sh@196 -- # blocks=() 00:04:48.601 16:20:25 -- setup/devices.sh@196 -- # declare -a blocks 00:04:48.601 16:20:25 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:48.601 16:20:25 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:48.601 16:20:25 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:48.601 16:20:25 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:48.601 16:20:25 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:48.601 16:20:25 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:48.601 16:20:25 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:48.601 16:20:25 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:48.601 16:20:25 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:48.601 16:20:25 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:48.601 16:20:25 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:48.601 No valid GPT data, bailing 00:04:48.601 16:20:25 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:48.601 16:20:25 -- scripts/common.sh@393 -- # pt= 00:04:48.601 16:20:25 -- scripts/common.sh@394 -- # return 1 00:04:48.601 16:20:25 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:48.601 16:20:25 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:48.601 16:20:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:48.601 16:20:25 -- setup/common.sh@80 -- # echo 5368709120 00:04:48.601 16:20:25 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:48.601 16:20:25 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:48.601 16:20:25 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:48.601 16:20:25 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:48.601 16:20:25 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:48.601 16:20:25 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:48.601 16:20:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.601 16:20:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.601 16:20:25 -- common/autotest_common.sh@10 -- # set +x 00:04:48.601 ************************************ 00:04:48.601 START TEST nvme_mount 00:04:48.601 ************************************ 00:04:48.601 16:20:25 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:48.601 16:20:25 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:48.601 16:20:25 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:48.601 16:20:25 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.601 16:20:25 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.601 16:20:25 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:48.601 16:20:25 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:48.601 16:20:25 -- setup/common.sh@40 -- # local part_no=1 00:04:48.601 16:20:25 -- setup/common.sh@41 -- # local size=1073741824 00:04:48.601 16:20:25 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:48.601 16:20:25 -- setup/common.sh@44 -- # parts=() 00:04:48.601 16:20:25 -- setup/common.sh@44 -- # local parts 00:04:48.601 16:20:25 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:48.601 16:20:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:48.601 16:20:25 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:48.601 16:20:25 -- setup/common.sh@46 -- # (( part++ )) 00:04:48.601 16:20:25 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:48.601 16:20:25 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:48.601 16:20:25 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:48.601 16:20:25 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:49.975 Creating new GPT entries in memory. 00:04:49.975 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:49.975 other utilities. 00:04:49.976 16:20:26 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:49.976 16:20:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.976 16:20:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:49.976 16:20:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:49.976 16:20:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:50.924 Creating new GPT entries in memory. 00:04:50.924 The operation has completed successfully. 00:04:50.924 16:20:27 -- setup/common.sh@57 -- # (( part++ )) 00:04:50.924 16:20:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.924 16:20:27 -- setup/common.sh@62 -- # wait 98265 00:04:50.924 16:20:27 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:50.924 16:20:27 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:50.924 16:20:27 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:50.924 16:20:27 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:50.924 16:20:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:50.924 16:20:27 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:50.924 16:20:27 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:50.924 16:20:27 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:50.924 16:20:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:50.924 16:20:27 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:50.924 16:20:27 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:50.924 16:20:27 -- setup/devices.sh@53 -- # local found=0 00:04:50.924 16:20:27 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:50.924 16:20:27 -- setup/devices.sh@56 -- # : 00:04:50.924 16:20:27 -- setup/devices.sh@59 -- # local pci status 00:04:50.924 16:20:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.924 16:20:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:50.924 16:20:27 -- setup/devices.sh@47 -- # setup output config 00:04:50.924 16:20:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.924 16:20:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.924 16:20:27 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:50.924 16:20:27 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:50.924 16:20:27 -- setup/devices.sh@63 -- # found=1 00:04:50.924 16:20:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.924 16:20:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:50.924 16:20:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.182 16:20:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:51.182 16:20:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.134 16:20:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.134 16:20:28 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:52.134 16:20:28 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.134 16:20:28 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.134 16:20:28 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.134 16:20:28 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:52.134 16:20:28 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.134 16:20:28 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.134 16:20:28 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.134 16:20:28 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.134 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.134 16:20:28 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.134 16:20:28 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.391 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:52.391 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:52.391 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:52.391 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:52.391 16:20:28 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:52.391 16:20:28 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:52.392 16:20:28 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.392 16:20:28 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:52.392 16:20:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:52.392 16:20:28 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.392 16:20:28 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.392 16:20:28 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:52.392 16:20:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:52.392 16:20:28 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.392 16:20:28 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.392 16:20:28 -- setup/devices.sh@53 -- # local found=0 00:04:52.392 16:20:28 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.392 16:20:28 -- setup/devices.sh@56 -- # : 00:04:52.392 16:20:28 -- setup/devices.sh@59 -- # local pci status 00:04:52.392 16:20:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.392 16:20:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:52.392 16:20:28 -- setup/devices.sh@47 -- # setup output config 00:04:52.392 16:20:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.392 16:20:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:52.392 16:20:29 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:52.392 16:20:29 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:52.392 16:20:29 -- setup/devices.sh@63 -- # found=1 00:04:52.392 16:20:29 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:04:52.392 16:20:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:52.392 16:20:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.649 16:20:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:52.649 16:20:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.583 16:20:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.583 16:20:30 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:53.583 16:20:30 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.583 16:20:30 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.583 16:20:30 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:53.583 16:20:30 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:53.583 16:20:30 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:53.583 16:20:30 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:53.583 16:20:30 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:53.583 16:20:30 -- setup/devices.sh@50 -- # local mount_point= 00:04:53.583 16:20:30 -- setup/devices.sh@51 -- # local test_file= 00:04:53.583 16:20:30 -- setup/devices.sh@53 -- # local found=0 00:04:53.583 16:20:30 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:53.583 16:20:30 -- setup/devices.sh@59 -- # local pci status 00:04:53.583 16:20:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.583 16:20:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:53.583 16:20:30 -- setup/devices.sh@47 -- # setup output config 00:04:53.583 16:20:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.584 16:20:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:53.842 16:20:30 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.842 16:20:30 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:53.842 16:20:30 -- setup/devices.sh@63 -- # found=1 00:04:53.842 16:20:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.842 16:20:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.842 16:20:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.842 16:20:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:53.842 16:20:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.221 16:20:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.221 16:20:31 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:55.221 16:20:31 -- setup/devices.sh@68 -- # return 0 00:04:55.221 16:20:31 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:55.221 16:20:31 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.221 16:20:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.221 16:20:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.221 16:20:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.221 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:55.221 00:04:55.221 real 0m6.434s 00:04:55.221 user 0m0.690s 00:04:55.221 sys 0m3.598s 00:04:55.221 16:20:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.221 16:20:31 -- 
common/autotest_common.sh@10 -- # set +x 00:04:55.221 ************************************ 00:04:55.221 END TEST nvme_mount 00:04:55.221 ************************************ 00:04:55.221 16:20:31 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:55.221 16:20:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.221 16:20:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.221 16:20:31 -- common/autotest_common.sh@10 -- # set +x 00:04:55.221 ************************************ 00:04:55.221 START TEST dm_mount 00:04:55.221 ************************************ 00:04:55.221 16:20:31 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:55.221 16:20:31 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:55.221 16:20:31 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:55.221 16:20:31 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:55.221 16:20:31 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:55.221 16:20:31 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:55.221 16:20:31 -- setup/common.sh@40 -- # local part_no=2 00:04:55.221 16:20:31 -- setup/common.sh@41 -- # local size=1073741824 00:04:55.221 16:20:31 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:55.221 16:20:31 -- setup/common.sh@44 -- # parts=() 00:04:55.221 16:20:31 -- setup/common.sh@44 -- # local parts 00:04:55.221 16:20:31 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:55.221 16:20:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.221 16:20:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.221 16:20:31 -- setup/common.sh@46 -- # (( part++ )) 00:04:55.221 16:20:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.221 16:20:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.221 16:20:31 -- setup/common.sh@46 -- # (( part++ )) 00:04:55.221 16:20:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.221 16:20:31 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:55.221 16:20:31 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:55.221 16:20:31 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:56.173 Creating new GPT entries in memory. 00:04:56.173 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:56.173 other utilities. 00:04:56.173 16:20:32 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:56.173 16:20:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.173 16:20:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:56.173 16:20:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:56.173 16:20:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:57.105 Creating new GPT entries in memory. 00:04:57.105 The operation has completed successfully. 00:04:57.105 16:20:33 -- setup/common.sh@57 -- # (( part++ )) 00:04:57.105 16:20:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.105 16:20:33 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.105 16:20:33 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.105 16:20:33 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:58.482 The operation has completed successfully. 
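(Every repartitioning pass in the two mount tests above is the same recipe: zap the label, then carve fixed-size partitions by sector, holding flock on the disk so nothing rewrites the table concurrently while sync_dev_uevents.sh waits for the partition uevents. A bare-bones reproduction of the arithmetic and the sgdisk calls follows; the device path is a placeholder and the sizes simply mirror the trace.)

    #!/usr/bin/env bash
    # Reproduces the sector math from the trace: size /= 4096, first
    # partition starts at sector 2048, each next one starts right after.
    disk=/dev/nvme0n1          # placeholder: the 5 GiB test NVMe above
    size=1073741824
    (( size /= 4096 ))         # byte budget -> per-partition sector count

    sgdisk "$disk" --zap-all   # destroy any existing GPT/MBR first

    part_start=0 part_end=0
    for part in 1 2; do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        # flock keeps other writers off the disk while the table changes
        flock "$disk" sgdisk "$disk" --new="$part:$part_start:$part_end"
    done

(With these numbers the loop emits exactly the ranges seen in the trace: 1:2048:264191 and 2:264192:526335.)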
00:04:58.482 16:20:34 -- setup/common.sh@57 -- # (( part++ )) 00:04:58.482 16:20:34 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.482 16:20:34 -- setup/common.sh@62 -- # wait 98781 00:04:58.482 16:20:34 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:58.482 16:20:34 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:58.482 16:20:34 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:58.482 16:20:34 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:58.482 16:20:34 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:58.482 16:20:34 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.482 16:20:34 -- setup/devices.sh@161 -- # break 00:04:58.482 16:20:34 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.482 16:20:34 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:58.482 16:20:34 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:58.482 16:20:34 -- setup/devices.sh@166 -- # dm=dm-0 00:04:58.482 16:20:34 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:58.482 16:20:34 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:58.482 16:20:34 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:58.482 16:20:34 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:58.482 16:20:34 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:58.482 16:20:34 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.482 16:20:34 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:58.482 16:20:34 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:58.482 16:20:35 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:58.482 16:20:35 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:58.482 16:20:35 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:58.482 16:20:35 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:58.482 16:20:35 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:58.482 16:20:35 -- setup/devices.sh@53 -- # local found=0 00:04:58.482 16:20:35 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:58.482 16:20:35 -- setup/devices.sh@56 -- # : 00:04:58.482 16:20:35 -- setup/devices.sh@59 -- # local pci status 00:04:58.482 16:20:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.482 16:20:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:58.482 16:20:35 -- setup/devices.sh@47 -- # setup output config 00:04:58.482 16:20:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.482 16:20:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:58.482 16:20:35 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.482 16:20:35 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:58.482 16:20:35 -- setup/devices.sh@63 -- # found=1 00:04:58.482 16:20:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.482 16:20:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.482 16:20:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.482 16:20:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:58.482 16:20:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.858 16:20:36 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.858 16:20:36 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:59.858 16:20:36 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.858 16:20:36 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:59.858 16:20:36 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:59.858 16:20:36 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:59.858 16:20:36 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:59.858 16:20:36 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:59.858 16:20:36 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:59.858 16:20:36 -- setup/devices.sh@50 -- # local mount_point= 00:04:59.858 16:20:36 -- setup/devices.sh@51 -- # local test_file= 00:04:59.858 16:20:36 -- setup/devices.sh@53 -- # local found=0 00:04:59.858 16:20:36 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:59.858 16:20:36 -- setup/devices.sh@59 -- # local pci status 00:04:59.858 16:20:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.858 16:20:36 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:59.858 16:20:36 -- setup/devices.sh@47 -- # setup output config 00:04:59.858 16:20:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.858 16:20:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:59.858 16:20:36 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.858 16:20:36 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:59.858 16:20:36 -- setup/devices.sh@63 -- # found=1 00:04:59.858 16:20:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.858 16:20:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:59.858 16:20:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.117 16:20:36 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:00.117 16:20:36 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.054 16:20:37 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.054 16:20:37 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.054 16:20:37 -- setup/devices.sh@68 -- # return 0 00:05:01.054 16:20:37 -- setup/devices.sh@187 -- # cleanup_dm 00:05:01.054 16:20:37 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.054 16:20:37 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.054 16:20:37 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:01.054 16:20:37 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.054 16:20:37 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:01.054 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.054 16:20:37 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.054 16:20:37 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:01.054 00:05:01.054 real 0m6.090s 00:05:01.054 user 0m0.476s 00:05:01.054 sys 0m2.454s 00:05:01.054 16:20:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.054 16:20:37 -- common/autotest_common.sh@10 -- # set +x 00:05:01.054 ************************************ 00:05:01.054 END TEST dm_mount 00:05:01.054 ************************************ 00:05:01.313 16:20:37 -- setup/devices.sh@1 -- # cleanup 00:05:01.313 16:20:37 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:01.313 16:20:37 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:01.313 16:20:37 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.313 16:20:37 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:01.313 16:20:37 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.313 16:20:37 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.313 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.313 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:01.313 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.313 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:01.313 16:20:37 -- setup/devices.sh@12 -- # cleanup_dm 00:05:01.313 16:20:37 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:01.313 16:20:37 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.313 16:20:37 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.313 16:20:37 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.313 16:20:37 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.313 16:20:37 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:01.313 00:05:01.313 real 0m13.285s 00:05:01.313 user 0m1.571s 00:05:01.313 sys 0m6.358s 00:05:01.313 16:20:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.313 16:20:37 -- common/autotest_common.sh@10 -- # set +x 00:05:01.313 ************************************ 00:05:01.313 END TEST devices 00:05:01.313 ************************************ 00:05:01.313 00:05:01.313 real 0m26.607s 00:05:01.313 user 0m6.043s 00:05:01.313 sys 0m15.322s 00:05:01.313 16:20:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.313 16:20:38 -- common/autotest_common.sh@10 -- # set +x 00:05:01.313 ************************************ 00:05:01.313 END TEST setup.sh 00:05:01.313 ************************************ 00:05:01.313 16:20:38 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:01.572 Hugepages 00:05:01.572 node hugesize free / total 00:05:01.572 node0 1048576kB 0 / 0 00:05:01.572 node0 2048kB 2048 / 2048 00:05:01.572 00:05:01.572 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.572 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:01.916 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:01.916 16:20:38 -- spdk/autotest.sh@141 -- # uname -s 00:05:01.916 16:20:38 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:01.916 16:20:38 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:05:01.916 16:20:38 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:02.192 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.567 16:20:39 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:04.504 16:20:41 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:04.504 16:20:41 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:04.504 16:20:41 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:04.504 16:20:41 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:04.504 16:20:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:04.504 16:20:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:04.504 16:20:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.504 16:20:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:04.504 16:20:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:04.504 16:20:41 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:04.504 16:20:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:04.504 16:20:41 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:04.763 Waiting for block devices as requested 00:05:04.763 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.763 16:20:41 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:04.763 16:20:41 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:04.763 16:20:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:04.763 16:20:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:04.763 16:20:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:04.763 16:20:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:04.763 16:20:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:04.763 16:20:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:04.763 16:20:41 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:04.763 16:20:41 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:04.763 16:20:41 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:04.763 16:20:41 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:04.763 16:20:41 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:04.763 16:20:41 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:04.763 16:20:41 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:04.763 16:20:41 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:04.763 16:20:41 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:04.763 16:20:41 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:04.763 16:20:41 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:04.763 16:20:41 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:04.763 16:20:41 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:04.763 16:20:41 -- common/autotest_common.sh@1542 -- # continue 00:05:04.763 16:20:41 
-- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:04.763 16:20:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:04.763 16:20:41 -- common/autotest_common.sh@10 -- # set +x 00:05:05.022 16:20:41 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:05.022 16:20:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:05.022 16:20:41 -- common/autotest_common.sh@10 -- # set +x 00:05:05.022 16:20:41 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.281 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:05.281 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:06.660 16:20:43 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:06.660 16:20:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:06.660 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:05:06.660 16:20:43 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:06.660 16:20:43 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:06.660 16:20:43 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:06.660 16:20:43 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:06.660 16:20:43 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:06.660 16:20:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:06.660 16:20:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:06.660 16:20:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:06.660 16:20:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:06.660 16:20:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:06.660 16:20:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:06.660 16:20:43 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:06.660 16:20:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:05:06.660 16:20:43 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:06.660 16:20:43 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:06.660 16:20:43 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:06.660 16:20:43 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:06.660 16:20:43 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:06.660 16:20:43 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:06.660 16:20:43 -- common/autotest_common.sh@1578 -- # return 0 00:05:06.660 16:20:43 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:05:06.660 16:20:43 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:06.660 16:20:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:06.660 16:20:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.660 16:20:43 -- common/autotest_common.sh@10 -- # set +x 00:05:06.660 ************************************ 00:05:06.660 START TEST unittest 00:05:06.660 ************************************ 00:05:06.660 16:20:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:06.660 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:06.660 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:06.660 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:06.660 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:06.660 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:05:06.660 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:06.660 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:06.660 ++ rpc_py=rpc_cmd 00:05:06.660 ++ set -e 00:05:06.660 ++ shopt -s nullglob 00:05:06.660 ++ shopt -s extglob 00:05:06.660 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:06.660 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:06.660 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:06.660 +++ CONFIG_FIO_PLUGIN=y 00:05:06.660 +++ CONFIG_NVME_CUSE=y 00:05:06.660 +++ CONFIG_RAID5F=y 00:05:06.660 +++ CONFIG_LTO=n 00:05:06.660 +++ CONFIG_SMA=n 00:05:06.660 +++ CONFIG_ISAL=y 00:05:06.660 +++ CONFIG_OPENSSL_PATH= 00:05:06.660 +++ CONFIG_IDXD_KERNEL=n 00:05:06.660 +++ CONFIG_URING_PATH= 00:05:06.660 +++ CONFIG_DAOS=n 00:05:06.660 +++ CONFIG_DPDK_LIB_DIR= 00:05:06.660 +++ CONFIG_OCF=n 00:05:06.660 +++ CONFIG_EXAMPLES=y 00:05:06.660 +++ CONFIG_RDMA_PROV=verbs 00:05:06.660 +++ CONFIG_ISCSI_INITIATOR=y 00:05:06.660 +++ CONFIG_VTUNE=n 00:05:06.660 +++ CONFIG_DPDK_INC_DIR= 00:05:06.660 +++ CONFIG_CET=n 00:05:06.660 +++ CONFIG_TESTS=y 00:05:06.660 +++ CONFIG_APPS=y 00:05:06.660 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:06.660 +++ CONFIG_DAOS_DIR= 00:05:06.660 +++ CONFIG_CRYPTO_MLX5=n 00:05:06.660 +++ CONFIG_XNVME=n 00:05:06.660 +++ CONFIG_UNIT_TESTS=y 00:05:06.660 +++ CONFIG_FUSE=n 00:05:06.660 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:06.660 +++ CONFIG_OCF_PATH= 00:05:06.660 +++ CONFIG_WPDK_DIR= 00:05:06.660 +++ CONFIG_VFIO_USER=n 00:05:06.660 +++ CONFIG_MAX_LCORES= 00:05:06.660 +++ CONFIG_ARCH=native 00:05:06.660 +++ CONFIG_TSAN=n 00:05:06.660 +++ CONFIG_VIRTIO=y 00:05:06.660 +++ CONFIG_IPSEC_MB=n 00:05:06.660 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:06.660 +++ CONFIG_ASAN=y 00:05:06.660 +++ CONFIG_SHARED=n 00:05:06.660 +++ CONFIG_VTUNE_DIR= 00:05:06.660 +++ CONFIG_RDMA_SET_TOS=y 00:05:06.660 +++ CONFIG_VBDEV_COMPRESS=n 00:05:06.660 +++ CONFIG_VFIO_USER_DIR= 00:05:06.660 +++ CONFIG_FUZZER_LIB= 00:05:06.660 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:06.660 +++ CONFIG_USDT=n 00:05:06.660 +++ CONFIG_URING_ZNS=n 00:05:06.660 +++ CONFIG_FC_PATH= 00:05:06.660 +++ CONFIG_COVERAGE=y 00:05:06.660 +++ CONFIG_CUSTOMOCF=n 00:05:06.660 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:06.660 +++ CONFIG_WERROR=y 00:05:06.660 +++ CONFIG_DEBUG=y 00:05:06.660 +++ CONFIG_RDMA=y 00:05:06.660 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:06.660 +++ CONFIG_FUZZER=n 00:05:06.660 +++ CONFIG_FC=n 00:05:06.660 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:06.660 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:06.660 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:06.660 +++ CONFIG_CROSS_PREFIX= 00:05:06.660 +++ CONFIG_PREFIX=/usr/local 00:05:06.660 +++ CONFIG_HAVE_LIBBSD=n 00:05:06.660 +++ CONFIG_UBSAN=y 00:05:06.660 +++ CONFIG_PGO_CAPTURE=n 00:05:06.660 +++ CONFIG_UBLK=n 00:05:06.660 +++ CONFIG_ISAL_CRYPTO=y 00:05:06.660 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:06.660 +++ CONFIG_CRYPTO=n 00:05:06.660 +++ CONFIG_RBD=n 00:05:06.660 +++ CONFIG_LIBDIR= 00:05:06.660 +++ CONFIG_IPSEC_MB_DIR= 00:05:06.660 +++ CONFIG_PGO_USE=n 00:05:06.660 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:06.660 +++ CONFIG_GOLANG=n 00:05:06.660 +++ CONFIG_VHOST=y 00:05:06.660 +++ CONFIG_IDXD=y 00:05:06.660 +++ CONFIG_AVAHI=n 00:05:06.660 +++ CONFIG_URING=n 00:05:06.660 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:06.660 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:06.660 ++++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/common 00:05:06.660 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:06.660 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:06.660 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:06.660 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:06.660 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:06.660 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:06.660 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:06.660 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:06.660 +++ VHOST_APP=("$_app_dir/vhost") 00:05:06.660 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:06.660 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:06.660 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:06.660 +++ [[ #ifndef SPDK_CONFIG_H 00:05:06.660 #define SPDK_CONFIG_H 00:05:06.660 #define SPDK_CONFIG_APPS 1 00:05:06.660 #define SPDK_CONFIG_ARCH native 00:05:06.660 #define SPDK_CONFIG_ASAN 1 00:05:06.660 #undef SPDK_CONFIG_AVAHI 00:05:06.660 #undef SPDK_CONFIG_CET 00:05:06.660 #define SPDK_CONFIG_COVERAGE 1 00:05:06.660 #define SPDK_CONFIG_CROSS_PREFIX 00:05:06.660 #undef SPDK_CONFIG_CRYPTO 00:05:06.660 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:06.660 #undef SPDK_CONFIG_CUSTOMOCF 00:05:06.660 #undef SPDK_CONFIG_DAOS 00:05:06.660 #define SPDK_CONFIG_DAOS_DIR 00:05:06.660 #define SPDK_CONFIG_DEBUG 1 00:05:06.660 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:06.660 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:06.660 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:06.660 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:06.660 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:06.660 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:06.660 #define SPDK_CONFIG_EXAMPLES 1 00:05:06.660 #undef SPDK_CONFIG_FC 00:05:06.660 #define SPDK_CONFIG_FC_PATH 00:05:06.660 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:06.660 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:06.660 #undef SPDK_CONFIG_FUSE 00:05:06.660 #undef SPDK_CONFIG_FUZZER 00:05:06.660 #define SPDK_CONFIG_FUZZER_LIB 00:05:06.660 #undef SPDK_CONFIG_GOLANG 00:05:06.660 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:06.660 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:06.660 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:06.660 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:06.660 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:06.660 #define SPDK_CONFIG_IDXD 1 00:05:06.660 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:06.660 #undef SPDK_CONFIG_IPSEC_MB 00:05:06.660 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:06.660 #define SPDK_CONFIG_ISAL 1 00:05:06.660 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:06.660 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:06.660 #define SPDK_CONFIG_LIBDIR 00:05:06.660 #undef SPDK_CONFIG_LTO 00:05:06.660 #define SPDK_CONFIG_MAX_LCORES 00:05:06.660 #define SPDK_CONFIG_NVME_CUSE 1 00:05:06.660 #undef SPDK_CONFIG_OCF 00:05:06.660 #define SPDK_CONFIG_OCF_PATH 00:05:06.660 #define SPDK_CONFIG_OPENSSL_PATH 00:05:06.660 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:06.660 #undef SPDK_CONFIG_PGO_USE 00:05:06.660 #define SPDK_CONFIG_PREFIX /usr/local 00:05:06.660 #define SPDK_CONFIG_RAID5F 1 00:05:06.660 #undef SPDK_CONFIG_RBD 00:05:06.660 #define SPDK_CONFIG_RDMA 1 00:05:06.660 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:06.660 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:06.660 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:06.660 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:06.660 #undef SPDK_CONFIG_SHARED 00:05:06.660 #undef SPDK_CONFIG_SMA 00:05:06.660 #define SPDK_CONFIG_TESTS 1 00:05:06.660 
#undef SPDK_CONFIG_TSAN 00:05:06.660 #undef SPDK_CONFIG_UBLK 00:05:06.660 #define SPDK_CONFIG_UBSAN 1 00:05:06.660 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:06.660 #undef SPDK_CONFIG_URING 00:05:06.661 #define SPDK_CONFIG_URING_PATH 00:05:06.661 #undef SPDK_CONFIG_URING_ZNS 00:05:06.661 #undef SPDK_CONFIG_USDT 00:05:06.661 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:06.661 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:06.661 #undef SPDK_CONFIG_VFIO_USER 00:05:06.661 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:06.661 #define SPDK_CONFIG_VHOST 1 00:05:06.661 #define SPDK_CONFIG_VIRTIO 1 00:05:06.661 #undef SPDK_CONFIG_VTUNE 00:05:06.661 #define SPDK_CONFIG_VTUNE_DIR 00:05:06.661 #define SPDK_CONFIG_WERROR 1 00:05:06.661 #define SPDK_CONFIG_WPDK_DIR 00:05:06.661 #undef SPDK_CONFIG_XNVME 00:05:06.661 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:06.661 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:06.661 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:06.661 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:06.661 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.661 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.661 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:06.661 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:06.661 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:06.661 ++++ export PATH 00:05:06.661 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:06.661 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:06.661 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:06.661 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:06.661 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:06.661 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:06.661 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:06.661 +++ TEST_TAG=N/A 00:05:06.661 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:06.661 ++ : 1 00:05:06.661 ++ export RUN_NIGHTLY 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_RUN_VALGRIND 00:05:06.661 ++ : 1 00:05:06.661 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:06.661 ++ : 1 00:05:06.661 ++ export SPDK_TEST_UNITTEST 00:05:06.661 ++ : 00:05:06.661 ++ export SPDK_TEST_AUTOBUILD 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_RELEASE_BUILD 00:05:06.661 ++ : 0 
00:05:06.661 ++ export SPDK_TEST_ISAL 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_ISCSI 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:06.661 ++ : 1 00:05:06.661 ++ export SPDK_TEST_NVME 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_NVME_PMR 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_NVME_BP 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_NVME_CLI 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_NVME_CUSE 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_NVME_FDP 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_NVMF 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_VFIOUSER 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_FUZZER 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_FUZZER_SHORT 00:05:06.661 ++ : rdma 00:05:06.661 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_RBD 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_VHOST 00:05:06.661 ++ : 1 00:05:06.661 ++ export SPDK_TEST_BLOCKDEV 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_IOAT 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_BLOBFS 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_VHOST_INIT 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_LVOL 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:06.661 ++ : 1 00:05:06.661 ++ export SPDK_RUN_ASAN 00:05:06.661 ++ : 1 00:05:06.661 ++ export SPDK_RUN_UBSAN 00:05:06.661 ++ : 00:05:06.661 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_RUN_NON_ROOT 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_CRYPTO 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_FTL 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_OCF 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_VMD 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_OPAL 00:05:06.661 ++ : 00:05:06.661 ++ export SPDK_TEST_NATIVE_DPDK 00:05:06.661 ++ : true 00:05:06.661 ++ export SPDK_AUTOTEST_X 00:05:06.661 ++ : 1 00:05:06.661 ++ export SPDK_TEST_RAID5 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_URING 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_USDT 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_USE_IGB_UIO 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_SCHEDULER 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_SCANBUILD 00:05:06.661 ++ : 00:05:06.661 ++ export SPDK_TEST_NVMF_NICS 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_SMA 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_DAOS 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_XNVME 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_ACCEL_DSA 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_ACCEL_IAA 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_ACCEL_IOAT 00:05:06.661 ++ : 00:05:06.661 ++ export SPDK_TEST_FUZZER_TARGET 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_TEST_NVMF_MDNS 00:05:06.661 ++ : 0 00:05:06.661 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:06.661 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:06.661 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:06.661 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:06.661 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:06.661 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:06.661 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:06.661 ++ export 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:06.661 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:06.661 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:06.661 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:06.661 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:06.661 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:06.661 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:06.661 ++ PYTHONDONTWRITEBYTECODE=1 00:05:06.661 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:06.661 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:06.661 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:06.661 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:06.661 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:06.661 ++ rm -rf /var/tmp/asan_suppression_file 00:05:06.661 ++ cat 00:05:06.661 ++ echo leak:libfuse3.so 00:05:06.661 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:06.661 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:06.661 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:06.661 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:06.661 ++ '[' -z /var/spdk/dependencies ']' 00:05:06.661 ++ export DEPENDENCY_DIR 00:05:06.661 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:06.661 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:06.661 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:06.661 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:06.661 ++ export QEMU_BIN= 00:05:06.661 ++ QEMU_BIN= 00:05:06.661 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:06.661 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:06.661 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:06.661 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:06.661 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:06.661 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:06.661 ++ '[' 0 -eq 0 ']' 00:05:06.661 ++ export valgrind= 00:05:06.661 ++ valgrind= 00:05:06.661 +++ uname -s 00:05:06.661 ++ '[' Linux = Linux ']' 00:05:06.661 ++ HUGEMEM=4096 00:05:06.661 ++ export CLEAR_HUGE=yes 00:05:06.661 ++ CLEAR_HUGE=yes 00:05:06.661 ++ [[ 0 -eq 1 ]] 00:05:06.661 ++ [[ 0 -eq 1 ]] 00:05:06.661 ++ MAKE=make 00:05:06.661 +++ nproc 00:05:06.661 ++ MAKEFLAGS=-j10 00:05:06.661 ++ export HUGEMEM=4096 00:05:06.661 ++ HUGEMEM=4096 00:05:06.661 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:06.661 ++ NO_HUGE=() 00:05:06.661 ++ TEST_MODE= 00:05:06.661 ++ [[ -z '' ]] 00:05:06.661 ++ 
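
The environment block above arms the sanitizers for the run: ASAN_OPTIONS and UBSAN_OPTIONS make any sanitizer report abort the process so the CI job fails loudly, and a LeakSanitizer suppression file is generated on the fly to ignore a known leak in libfuse3. The long ': 1' / 'export RUN_NIGHTLY' and ': 0' / 'export SPDK_TEST_*' pairs traced earlier are consistent with the usual default-and-export idiom, sketched here alongside the sanitizer setup (option strings copied from the trace):

    # Default-and-export idiom that would produce the ': 0' / 'export VAR'
    # trace pairs above (a reading of the trace, not quoted source):
    : "${SPDK_TEST_NVME:=0}"; export SPDK_TEST_NVME

    # Abort-on-error sanitizer settings, as exported in this run.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134

    # LeakSanitizer suppressions: one 'leak:<pattern>' entry per line.
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo "leak:libfuse3.so" > "$supp"
    export LSAN_OPTIONS=suppressions=$supp
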
PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:06.661 ++ exec 00:05:06.661 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:06.661 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:06.661 ++ set_test_storage 2147483648 00:05:06.661 ++ [[ -v testdir ]] 00:05:06.661 ++ local requested_size=2147483648 00:05:06.661 ++ local mount target_dir 00:05:06.661 ++ local -A mounts fss sizes avails uses 00:05:06.661 ++ local source fs size avail mount use 00:05:06.661 ++ local storage_fallback storage_candidates 00:05:06.661 +++ mktemp -udt spdk.XXXXXX 00:05:06.661 ++ storage_fallback=/tmp/spdk.d6RnPP 00:05:06.661 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:06.661 ++ [[ -n '' ]] 00:05:06.661 ++ [[ -n '' ]] 00:05:06.661 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.d6RnPP/tests/unit /tmp/spdk.d6RnPP 00:05:06.661 ++ requested_size=2214592512 00:05:06.661 ++ read -r source fs size use avail _ mount 00:05:06.661 +++ df -T 00:05:06.661 +++ grep -v Filesystem 00:05:06.661 ++ mounts["$mount"]=udev 00:05:06.661 ++ fss["$mount"]=devtmpfs 00:05:06.661 ++ avails["$mount"]=6224461824 00:05:06.661 ++ sizes["$mount"]=6224461824 00:05:06.661 ++ uses["$mount"]=0 00:05:06.661 ++ read -r source fs size use avail _ mount 00:05:06.661 ++ mounts["$mount"]=tmpfs 00:05:06.661 ++ fss["$mount"]=tmpfs 00:05:06.661 ++ avails["$mount"]=1253408768 00:05:06.661 ++ sizes["$mount"]=1254514688 00:05:06.661 ++ uses["$mount"]=1105920 00:05:06.661 ++ read -r source fs size use avail _ mount 00:05:06.661 ++ mounts["$mount"]=/dev/vda1 00:05:06.661 ++ fss["$mount"]=ext4 00:05:06.661 ++ avails["$mount"]=10737590272 00:05:06.661 ++ sizes["$mount"]=20616794112 00:05:06.661 ++ uses["$mount"]=9862426624 00:05:06.661 ++ read -r source fs size use avail _ mount 00:05:06.661 ++ mounts["$mount"]=tmpfs 00:05:06.661 ++ fss["$mount"]=tmpfs 00:05:06.661 ++ avails["$mount"]=6272561152 00:05:06.661 ++ sizes["$mount"]=6272561152 00:05:06.661 ++ uses["$mount"]=0 00:05:06.661 ++ read -r source fs size use avail _ mount 00:05:06.661 ++ mounts["$mount"]=tmpfs 00:05:06.661 ++ fss["$mount"]=tmpfs 00:05:06.661 ++ avails["$mount"]=5242880 00:05:06.661 ++ sizes["$mount"]=5242880 00:05:06.661 ++ uses["$mount"]=0 00:05:06.661 ++ read -r source fs size use avail _ mount 00:05:06.661 ++ mounts["$mount"]=tmpfs 00:05:06.661 ++ fss["$mount"]=tmpfs 00:05:06.661 ++ avails["$mount"]=6272561152 00:05:06.661 ++ sizes["$mount"]=6272561152 00:05:06.661 ++ uses["$mount"]=0 00:05:06.661 ++ read -r source fs size use avail _ mount 00:05:06.661 ++ mounts["$mount"]=/dev/vda15 00:05:06.661 ++ fss["$mount"]=vfat 00:05:06.661 ++ avails["$mount"]=103089152 00:05:06.661 ++ sizes["$mount"]=109422592 00:05:06.661 ++ uses["$mount"]=6334464 00:05:06.661 ++ read -r source fs size use avail _ mount 00:05:06.661 ++ mounts["$mount"]=/dev/loop0 00:05:06.661 ++ fss["$mount"]=squashfs 00:05:06.661 ++ avails["$mount"]=0 00:05:06.661 ++ sizes["$mount"]=67108864 00:05:06.661 ++ uses["$mount"]=67108864 00:05:06.661 ++ read -r source fs size use avail _ mount 00:05:06.661 ++ mounts["$mount"]=/dev/loop2 00:05:06.661 ++ fss["$mount"]=squashfs 00:05:06.661 ++ avails["$mount"]=0 00:05:06.661 ++ sizes["$mount"]=41025536 00:05:06.661 ++ uses["$mount"]=41025536 00:05:06.661 ++ read -r source fs size use avail _ mount 00:05:06.661 ++ mounts["$mount"]=/dev/loop1 00:05:06.661 ++ 
fss["$mount"]=squashfs 00:05:06.661 ++ avails["$mount"]=0 00:05:06.661 ++ sizes["$mount"]=96337920 00:05:06.661 ++ uses["$mount"]=96337920 00:05:06.661 ++ read -r source fs size use avail _ mount 00:05:06.661 ++ mounts["$mount"]=tmpfs 00:05:06.662 ++ fss["$mount"]=tmpfs 00:05:06.662 ++ avails["$mount"]=1254510592 00:05:06.662 ++ sizes["$mount"]=1254510592 00:05:06.662 ++ uses["$mount"]=0 00:05:06.662 ++ read -r source fs size use avail _ mount 00:05:06.662 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:05:06.662 ++ fss["$mount"]=fuse.sshfs 00:05:06.662 ++ avails["$mount"]=96563650560 00:05:06.662 ++ sizes["$mount"]=105088212992 00:05:06.662 ++ uses["$mount"]=3139129344 00:05:06.662 ++ read -r source fs size use avail _ mount 00:05:06.662 ++ printf '* Looking for test storage...\n' 00:05:06.662 * Looking for test storage... 00:05:06.662 ++ local target_space new_size 00:05:06.662 ++ for target_dir in "${storage_candidates[@]}" 00:05:06.662 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:06.662 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:06.662 ++ mount=/ 00:05:06.662 ++ target_space=10737590272 00:05:06.662 ++ (( target_space == 0 || target_space < requested_size )) 00:05:06.662 ++ (( target_space >= requested_size )) 00:05:06.662 ++ [[ ext4 == tmpfs ]] 00:05:06.662 ++ [[ ext4 == ramfs ]] 00:05:06.662 ++ [[ / == / ]] 00:05:06.662 ++ new_size=12077019136 00:05:06.662 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:06.662 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:06.662 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:06.662 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:06.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:06.662 ++ return 0 00:05:06.662 ++ set -o errtrace 00:05:06.662 ++ shopt -s extdebug 00:05:06.662 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:06.662 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:06.662 16:20:43 -- common/autotest_common.sh@1672 -- # true 00:05:06.662 16:20:43 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:05:06.662 16:20:43 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:06.662 16:20:43 -- common/autotest_common.sh@29 -- # exec 00:05:06.662 16:20:43 -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:06.662 16:20:43 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:06.662 16:20:43 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:06.662 16:20:43 -- common/autotest_common.sh@18 -- # set -x 00:05:06.662 16:20:43 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:06.662 16:20:43 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:05:06.662 16:20:43 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:05:06.662 16:20:43 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:05:06.662 16:20:43 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:06.662 16:20:43 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:05:06.662 16:20:43 -- unit/unittest.sh@179 -- # hash lcov 00:05:06.662 16:20:43 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:06.662 16:20:43 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:06.662 16:20:43 -- unit/unittest.sh@180 -- # cov_avail=yes 00:05:06.662 16:20:43 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:05:06.662 16:20:43 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:06.662 16:20:43 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:06.662 16:20:43 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:06.662 16:20:43 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:05:06.662 --rc lcov_branch_coverage=1 00:05:06.662 --rc lcov_function_coverage=1 00:05:06.662 --rc genhtml_branch_coverage=1 00:05:06.662 --rc genhtml_function_coverage=1 00:05:06.662 --rc genhtml_legend=1 00:05:06.662 --rc geninfo_all_blocks=1 00:05:06.662 ' 00:05:06.662 16:20:43 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:05:06.662 --rc lcov_branch_coverage=1 00:05:06.662 --rc lcov_function_coverage=1 00:05:06.662 --rc genhtml_branch_coverage=1 00:05:06.662 --rc genhtml_function_coverage=1 00:05:06.662 --rc genhtml_legend=1 00:05:06.662 --rc geninfo_all_blocks=1 00:05:06.662 ' 00:05:06.662 16:20:43 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:05:06.662 --rc lcov_branch_coverage=1 00:05:06.662 --rc lcov_function_coverage=1 00:05:06.662 --rc genhtml_branch_coverage=1 00:05:06.662 --rc genhtml_function_coverage=1 00:05:06.662 --rc genhtml_legend=1 00:05:06.662 --rc geninfo_all_blocks=1 00:05:06.662 --no-external' 00:05:06.662 16:20:43 -- unit/unittest.sh@200 -- # LCOV='lcov 00:05:06.662 --rc lcov_branch_coverage=1 00:05:06.662 --rc lcov_function_coverage=1 00:05:06.662 --rc genhtml_branch_coverage=1 00:05:06.662 --rc genhtml_function_coverage=1 00:05:06.662 --rc genhtml_legend=1 00:05:06.662 --rc geninfo_all_blocks=1 00:05:06.662 --no-external' 00:05:06.662 16:20:43 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info
00:05:08.563 [geninfo: paired "no functions found" / "GCOV did not produce any data" warnings repeated for every header stub under /home/vagrant/spdk_repo/spdk/test/cpp_headers/*.gcno (00:05:08.563 through 00:05:08.823) and, at 00:05:55.516, for lib/ftl/upgrade/ftl_chunk_upgrade.gcno, ftl_p2l_upgrade.gcno and ftl_band_upgrade.gcno; expected for an initial capture of not-yet-exercised objects]
16:21:29 -- unit/unittest.sh@206 -- # uname -m 00:05:55.516 16:21:29 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:05:55.516 16:21:29 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:55.516 16:21:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.516 16:21:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.516 16:21:29 -- common/autotest_common.sh@10 -- # set +x 00:05:55.516 ************************************ 00:05:55.516 START TEST unittest_pci_event 00:05:55.516 ************************************ 00:05:55.516 16:21:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:55.516 00:05:55.516 00:05:55.516 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.516 http://cunit.sourceforge.net/ 00:05:55.516 00:05:55.516 00:05:55.516 Suite: pci_event 00:05:55.516 Test: test_pci_parse_event ...[2024-07-11 16:21:29.988601] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:55.516 [2024-07-11 16:21:29.989277] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:55.516 passed 00:05:55.516 00:05:55.516 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.516 suites 1 1 n/a 0 0 00:05:55.516 tests 1 1 1 0 0 00:05:55.516 asserts 15 15 15 0 n/a 00:05:55.516 00:05:55.516 Elapsed time = 0.001 seconds 00:05:55.516 00:05:55.516 real 0m0.039s 00:05:55.516 user 0m0.012s 00:05:55.516 sys 0m0.023s 00:05:55.516 16:21:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.516 16:21:30 -- common/autotest_common.sh@10 -- # set +x 00:05:55.516 ************************************ 00:05:55.516 END TEST unittest_pci_event 00:05:55.516 ************************************ 00:05:55.516 16:21:30 --
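
The coverage pass that produced the warnings summarized above is the baseline capture from unit/unittest.sh@202: lcov runs with -c -i before any test has executed, so every .gcno built from a header-only stub legitimately reports "no functions found". The invocation, reassembled from the trace with LCOV_OPTS expanded inline:

    UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
    mkdir -p "$UT_COVERAGE"
    # -c -i captures an initial, all-zero baseline; later captures are merged
    # against it so files never exercised still appear in the final report.
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
         --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
         --rc genhtml_legend=1 --rc geninfo_all_blocks=1 \
         --no-external -q -c -i -d . \
         -t Baseline -o "$UT_COVERAGE/ut_cov_base.info"
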
unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:55.516 16:21:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.516 16:21:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.516 16:21:30 -- common/autotest_common.sh@10 -- # set +x 00:05:55.516 ************************************ 00:05:55.516 START TEST unittest_include 00:05:55.516 ************************************ 00:05:55.516 16:21:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:55.516 00:05:55.516 00:05:55.516 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.516 http://cunit.sourceforge.net/ 00:05:55.516 00:05:55.516 00:05:55.516 Suite: histogram 00:05:55.516 Test: histogram_test ...passed 00:05:55.516 Test: histogram_merge ...passed 00:05:55.516 00:05:55.516 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.516 suites 1 1 n/a 0 0 00:05:55.516 tests 2 2 2 0 0 00:05:55.516 asserts 50 50 50 0 n/a 00:05:55.516 00:05:55.516 Elapsed time = 0.006 seconds 00:05:55.516 00:05:55.516 real 0m0.038s 00:05:55.516 user 0m0.012s 00:05:55.516 sys 0m0.027s 00:05:55.516 16:21:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.516 16:21:30 -- common/autotest_common.sh@10 -- # set +x 00:05:55.516 ************************************ 00:05:55.516 END TEST unittest_include 00:05:55.516 ************************************ 00:05:55.516 16:21:30 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:05:55.516 16:21:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.516 16:21:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.516 16:21:30 -- common/autotest_common.sh@10 -- # set +x 00:05:55.516 ************************************ 00:05:55.516 START TEST unittest_bdev 00:05:55.516 ************************************ 00:05:55.516 16:21:30 -- common/autotest_common.sh@1104 -- # unittest_bdev 00:05:55.516 16:21:30 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:55.516 00:05:55.516 00:05:55.516 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.517 http://cunit.sourceforge.net/ 00:05:55.517 00:05:55.517 00:05:55.517 Suite: bdev 00:05:55.517 Test: bytes_to_blocks_test ...passed 00:05:55.517 Test: num_blocks_test ...passed 00:05:55.517 Test: io_valid_test ...passed 00:05:55.517 Test: open_write_test ...[2024-07-11 16:21:30.236591] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:55.517 [2024-07-11 16:21:30.236962] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:55.517 [2024-07-11 16:21:30.237099] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:55.517 passed 00:05:55.517 Test: claim_test ...passed 00:05:55.517 Test: alias_add_del_test ...[2024-07-11 16:21:30.332220] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:55.517 [2024-07-11 16:21:30.332349] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:55.517 [2024-07-11 16:21:30.332409] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper 
alias 0 already exists 00:05:55.517 passed 00:05:55.517 Test: get_device_stat_test ...passed 00:05:55.517 Test: bdev_io_types_test ...passed 00:05:55.517 Test: bdev_io_wait_test ...passed 00:05:55.517 Test: bdev_io_spans_split_test ...passed 00:05:55.517 Test: bdev_io_boundary_split_test ...passed 00:05:55.517 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-11 16:21:30.530219] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:55.517 passed 00:05:55.517 Test: bdev_io_mix_split_test ...passed 00:05:55.517 Test: bdev_io_split_with_io_wait ...passed 00:05:55.517 Test: bdev_io_write_unit_split_test ...[2024-07-11 16:21:30.647779] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:55.517 [2024-07-11 16:21:30.647904] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:55.517 [2024-07-11 16:21:30.647933] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:55.517 [2024-07-11 16:21:30.648006] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:55.517 passed 00:05:55.517 Test: bdev_io_alignment_with_boundary ...passed 00:05:55.517 Test: bdev_io_alignment ...passed 00:05:55.517 Test: bdev_histograms ...passed 00:05:55.517 Test: bdev_write_zeroes ...passed 00:05:55.517 Test: bdev_compare_and_write ...passed 00:05:55.517 Test: bdev_compare ...passed 00:05:55.517 Test: bdev_compare_emulated ...passed 00:05:55.517 Test: bdev_zcopy_write ...passed 00:05:55.517 Test: bdev_zcopy_read ...passed 00:05:55.517 Test: bdev_open_while_hotremove ...passed 00:05:55.517 Test: bdev_close_while_hotremove ...passed 00:05:55.517 Test: bdev_open_ext_test ...[2024-07-11 16:21:31.108846] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:55.517 passed 00:05:55.517 Test: bdev_open_ext_unregister ...[2024-07-11 16:21:31.109085] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:55.517 passed 00:05:55.517 Test: bdev_set_io_timeout ...passed 00:05:55.517 Test: bdev_set_qd_sampling ...passed 00:05:55.517 Test: lba_range_overlap ...passed 00:05:55.517 Test: lock_lba_range_check_ranges ...passed 00:05:55.517 Test: lock_lba_range_with_io_outstanding ...passed 00:05:55.517 Test: lock_lba_range_overlapped ...passed 00:05:55.517 Test: bdev_quiesce ...[2024-07-11 16:21:31.326871] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
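
Every unit suite in this log is launched through the run_test wrapper from autotest_common.sh, which prints the starred START/END TEST banners and times the payload (the real/user/sys triplets above come from bash's time keyword); the ' -- file@line -- ' prefixes on traced commands are just the PS4 string set earlier combined with set -x. A minimal reimplementation of the banner-and-timing pattern, matching the output format seen here rather than quoting the real helper:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"   # emits the real/user/sys lines seen in this log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
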
00:05:55.517 passed 00:05:55.517 Test: bdev_io_abort ...passed 00:05:55.517 Test: bdev_unmap ...passed 00:05:55.517 Test: bdev_write_zeroes_split_test ...passed 00:05:55.517 Test: bdev_set_options_test ...[2024-07-11 16:21:31.468249] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:55.517 passed 00:05:55.517 Test: bdev_get_memory_domains ...passed 00:05:55.517 Test: bdev_io_ext ...passed 00:05:55.517 Test: bdev_io_ext_no_opts ...passed 00:05:55.517 Test: bdev_io_ext_invalid_opts ...passed 00:05:55.517 Test: bdev_io_ext_split ...passed 00:05:55.517 Test: bdev_io_ext_bounce_buffer ...passed 00:05:55.517 Test: bdev_register_uuid_alias ...[2024-07-11 16:21:31.687502] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 7859837b-c9ab-47c0-81cf-9b815dedd433 already exists 00:05:55.517 [2024-07-11 16:21:31.687598] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:7859837b-c9ab-47c0-81cf-9b815dedd433 alias for bdev bdev0 00:05:55.517 passed 00:05:55.517 Test: bdev_unregister_by_name ...[2024-07-11 16:21:31.709302] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:55.517 [2024-07-11 16:21:31.709367] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:05:55.517 passed 00:05:55.517 Test: for_each_bdev_test ...passed 00:05:55.517 Test: bdev_seek_test ...passed 00:05:55.517 Test: bdev_copy ...passed 00:05:55.517 Test: bdev_copy_split_test ...passed 00:05:55.517 Test: examine_locks ...passed 00:05:55.517 Test: claim_v2_rwo ...[2024-07-11 16:21:31.821552] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:55.517 [2024-07-11 16:21:31.821639] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:55.517 [2024-07-11 16:21:31.821660] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:55.517 [2024-07-11 16:21:31.821721] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:55.517 [2024-07-11 16:21:31.821738] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:55.517 passed 00:05:55.517 Test: claim_v2_rom ...[2024-07-11 16:21:31.821776] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:55.517 [2024-07-11 16:21:31.821905] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:55.517 [2024-07-11 16:21:31.821963] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:55.517 [2024-07-11 16:21:31.821991] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:05:55.517 [2024-07-11 16:21:31.822014] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:55.517 [2024-07-11 16:21:31.822051] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:55.517 [2024-07-11 16:21:31.822084] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:55.517 passed 00:05:55.517 Test: claim_v2_rwm ...[2024-07-11 16:21:31.822228] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:55.517 [2024-07-11 16:21:31.822293] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:55.517 [2024-07-11 16:21:31.822330] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:55.517 [2024-07-11 16:21:31.822354] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:55.517 [2024-07-11 16:21:31.822371] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:55.517 [2024-07-11 16:21:31.822394] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:55.517 [2024-07-11 16:21:31.822448] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:55.517 passed 00:05:55.517 Test: claim_v2_existing_writer ...[2024-07-11 16:21:31.822631] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:55.518 [2024-07-11 16:21:31.822661] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:55.518 passed 00:05:55.518 Test: claim_v2_existing_v1 ...[2024-07-11 16:21:31.822772] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:55.518 passed 00:05:55.518 Test: claim_v1_existing_v2 ...[2024-07-11 16:21:31.822805] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:55.518 [2024-07-11 16:21:31.822822] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:55.518 [2024-07-11 16:21:31.822965] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:55.518 [2024-07-11 16:21:31.823022] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:55.518 [2024-07-11 
16:21:31.823057] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:55.518 passed 00:05:55.518 Test: examine_claimed ...[2024-07-11 16:21:31.823381] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:55.518 passed 00:05:55.518 00:05:55.518 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.518 suites 1 1 n/a 0 0 00:05:55.518 tests 59 59 59 0 0 00:05:55.518 asserts 4599 4599 4599 0 n/a 00:05:55.518 00:05:55.518 Elapsed time = 1.662 seconds 00:05:55.518 16:21:31 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:55.518 00:05:55.518 00:05:55.518 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.518 http://cunit.sourceforge.net/ 00:05:55.518 00:05:55.518 00:05:55.518 Suite: nvme 00:05:55.518 Test: test_create_ctrlr ...passed 00:05:55.518 Test: test_reset_ctrlr ...[2024-07-11 16:21:31.876788] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 passed 00:05:55.518 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:55.518 Test: test_failover_ctrlr ...passed 00:05:55.518 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-11 16:21:31.879417] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 [2024-07-11 16:21:31.879643] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 [2024-07-11 16:21:31.879856] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 passed 00:05:55.518 Test: test_pending_reset ...[2024-07-11 16:21:31.881387] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 [2024-07-11 16:21:31.881661] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 passed 00:05:55.518 Test: test_attach_ctrlr ...[2024-07-11 16:21:31.882805] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:55.518 passed 00:05:55.518 Test: test_aer_cb ...passed 00:05:55.518 Test: test_submit_nvme_cmd ...passed 00:05:55.518 Test: test_add_remove_trid ...passed 00:05:55.518 Test: test_abort ...[2024-07-11 16:21:31.886248] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:55.518 passed 00:05:55.518 Test: test_get_io_qpair ...passed 00:05:55.518 Test: test_bdev_unregister ...passed 00:05:55.518 Test: test_compare_ns ...passed 00:05:55.518 Test: test_init_ana_log_page ...passed 00:05:55.518 Test: test_get_memory_domains ...passed 00:05:55.518 Test: test_reconnect_qpair ...[2024-07-11 16:21:31.889143] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
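The claim_v2_* failures earlier in this output enumerate the v2 claim option rules: read-write-once and read-only-many claims reject a shared key, read-only-many rejects a writable descriptor, and read-write-many requires a shared key. A simplified model of just those option checks follows; names are invented, and real SPDK additionally tracks per-module ownership of each claim:

```c
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

enum claim_type { CLAIM_NONE, CLAIM_RWO, CLAIM_ROM, CLAIM_RWM };

static int
claim_verify(enum claim_type want, const void *shared_key, bool desc_writable)
{
	switch (want) {
	case CLAIM_RWO:
		/* "key option not supported with read-write-once claims" */
		if (shared_key != NULL)
			return -EINVAL;
		break;
	case CLAIM_ROM:
		/* "key option not supported with read-only-may claims" */
		if (shared_key != NULL)
			return -EINVAL;
		/* "Cannot obtain read-only-many claim with writable descriptor" */
		if (desc_writable)
			return -EPERM;
		break;
	case CLAIM_RWM:
		/* "shared_claim_key option required with read-write-may claims" */
		if (shared_key == NULL)
			return -EINVAL;
		break;
	default:
		break;
	}
	return 0;
}
```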
00:05:55.518 passed 00:05:55.518 Test: test_create_bdev_ctrlr ...[2024-07-11 16:21:31.889698] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:55.518 passed 00:05:55.518 Test: test_add_multi_ns_to_bdev ...[2024-07-11 16:21:31.891071] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:55.518 passed 00:05:55.518 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:55.518 Test: test_admin_path ...passed 00:05:55.518 Test: test_reset_bdev_ctrlr ...passed 00:05:55.518 Test: test_find_io_path ...passed 00:05:55.518 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:55.518 Test: test_retry_io_for_io_path_error ...passed 00:05:55.518 Test: test_retry_io_count ...passed 00:05:55.518 Test: test_concurrent_read_ana_log_page ...passed 00:05:55.518 Test: test_retry_io_for_ana_error ...passed 00:05:55.518 Test: test_check_io_error_resiliency_params ...[2024-07-11 16:21:31.898632] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:05:55.518 [2024-07-11 16:21:31.898711] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:55.518 [2024-07-11 16:21:31.898746] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:55.518 [2024-07-11 16:21:31.898773] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:55.518 [2024-07-11 16:21:31.898793] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:55.518 [2024-07-11 16:21:31.898831] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:55.518 passed 00:05:55.518 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-11 16:21:31.898854] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:55.518 [2024-07-11 16:21:31.898911] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:55.518 [2024-07-11 16:21:31.898953] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:55.518 passed 00:05:55.518 Test: test_reconnect_ctrlr ...[2024-07-11 16:21:31.899821] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 [2024-07-11 16:21:31.900001] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
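The test_check_io_error_resiliency_params messages above spell out the complete rule set for the three timeout parameters. Collected into one hedged validator (illustrative signature, not the SPDK prototype; each branch comments the log message it corresponds to):

```c
#include <stdbool.h>
#include <stdint.h>

static bool
io_error_resiliency_params_ok(int32_t ctrlr_loss_timeout_sec,
			      uint32_t reconnect_delay_sec,
			      uint32_t fast_io_fail_timeout_sec)
{
	if (ctrlr_loss_timeout_sec < -1) {
		return false;	/* "ctrlr_loss_timeout_sec can't be less than -1" */
	}
	if (ctrlr_loss_timeout_sec == 0) {
		/* "Both reconnect_delay_sec and fast_io_fail_timeout_sec must
		 * be 0 if ctrlr_loss_timeout_sec is 0" */
		return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
	}
	if (reconnect_delay_sec == 0) {
		return false;	/* "can't be 0 if ctrlr_loss_timeout_sec is not 0" */
	}
	if (ctrlr_loss_timeout_sec > 0 &&
	    reconnect_delay_sec > (uint32_t)ctrlr_loss_timeout_sec) {
		return false;	/* "can't be more than ctrlr_loss_timeout_sec" */
	}
	if (fast_io_fail_timeout_sec != 0) {
		if (ctrlr_loss_timeout_sec > 0 &&
		    fast_io_fail_timeout_sec > (uint32_t)ctrlr_loss_timeout_sec) {
			return false;	/* fast_io_fail can't exceed ctrlr_loss */
		}
		if (reconnect_delay_sec > fast_io_fail_timeout_sec) {
			return false;	/* "can't be more than fast_io_fail_timeout_sec" */
		}
	}
	return true;
}
```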
00:05:55.518 [2024-07-11 16:21:31.900287] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 [2024-07-11 16:21:31.900458] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 [2024-07-11 16:21:31.900610] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 passed 00:05:55.518 Test: test_retry_failover_ctrlr ...[2024-07-11 16:21:31.901036] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 passed 00:05:55.518 Test: test_fail_path ...[2024-07-11 16:21:31.901702] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 [2024-07-11 16:21:31.901906] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 [2024-07-11 16:21:31.902022] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 [2024-07-11 16:21:31.902151] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 [2024-07-11 16:21:31.902310] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 passed 00:05:55.518 Test: test_nvme_ns_cmp ...passed 00:05:55.518 Test: test_ana_transition ...passed 00:05:55.518 Test: test_set_preferred_path ...passed 00:05:55.518 Test: test_find_next_io_path ...passed 00:05:55.518 Test: test_find_io_path_min_qd ...passed 00:05:55.518 Test: test_disable_auto_failback ...[2024-07-11 16:21:31.904134] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.518 passed 00:05:55.518 Test: test_set_multipath_policy ...passed 00:05:55.518 Test: test_uuid_generation ...passed 00:05:55.518 Test: test_retry_io_to_same_path ...passed 00:05:55.518 Test: test_race_between_reset_and_disconnected ...passed 00:05:55.518 Test: test_ctrlr_op_rpc ...passed 00:05:55.519 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:55.519 Test: test_disable_enable_ctrlr ...[2024-07-11 16:21:31.908085] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:55.519 [2024-07-11 16:21:31.908290] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
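Every suite in this log closes with the same Run Summary table; that output comes from CUnit's basic interface rather than from SPDK itself. A minimal self-contained harness producing the same shape of output (the suite and test here are invented examples; the CU_* calls are the real CUnit basic-interface API):

```c
#include <CUnit/Basic.h>

static void
test_example(void)
{
	CU_ASSERT_EQUAL(2 + 2, 4);
}

int
main(void)
{
	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	CU_pSuite suite = CU_add_suite("example", NULL, NULL);
	if (suite == NULL ||
	    CU_add_test(suite, "test_example", test_example) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();	/* prints the per-suite Run Summary table */
	CU_cleanup_registry();
	return CU_get_error();
}
```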
00:05:55.519 passed 00:05:55.519 Test: test_delete_ctrlr_done ...passed 00:05:55.519 Test: test_ns_remove_during_reset ...passed 00:05:55.519 00:05:55.519 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.519 suites 1 1 n/a 0 0 00:05:55.519 tests 48 48 48 0 0 00:05:55.519 asserts 3553 3553 3553 0 n/a 00:05:55.519 00:05:55.519 Elapsed time = 0.034 seconds 00:05:55.519 16:21:31 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:55.519 Test Options 00:05:55.519 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:55.519 00:05:55.519 00:05:55.519 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.519 http://cunit.sourceforge.net/ 00:05:55.519 00:05:55.519 00:05:55.519 Suite: raid 00:05:55.519 Test: test_create_raid ...passed 00:05:55.519 Test: test_create_raid_superblock ...passed 00:05:55.519 Test: test_delete_raid ...passed 00:05:55.519 Test: test_create_raid_invalid_args ...[2024-07-11 16:21:31.954659] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:55.519 [2024-07-11 16:21:31.955102] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:55.519 [2024-07-11 16:21:31.955567] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:55.519 [2024-07-11 16:21:31.955831] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:55.519 [2024-07-11 16:21:31.956612] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:55.519 passed 00:05:55.519 Test: test_delete_raid_invalid_args ...passed 00:05:55.519 Test: test_io_channel ...passed 00:05:55.519 Test: test_reset_io ...passed 00:05:55.519 Test: test_write_io ...passed 00:05:55.519 Test: test_read_io ...passed 00:05:56.086 Test: test_unmap_io ...passed 00:05:56.086 Test: test_io_failure ...[2024-07-11 16:21:32.890075] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:56.086 passed 00:05:56.086 Test: test_multi_raid_no_io ...passed 00:05:56.086 Test: test_multi_raid_with_io ...passed 00:05:56.086 Test: test_io_type_supported ...passed 00:05:56.359 Test: test_raid_json_dump_info ...passed 00:05:56.359 Test: test_context_size ...passed 00:05:56.359 Test: test_raid_level_conversions ...passed 00:05:56.359 Test: test_raid_process ...passed 00:05:56.359 Test: test_raid_io_split ...passed 00:05:56.359 00:05:56.359 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.359 suites 1 1 n/a 0 0 00:05:56.359 tests 19 19 19 0 0 00:05:56.359 asserts 177879 177879 177879 0 n/a 00:05:56.359 00:05:56.359 Elapsed time = 0.946 seconds 00:05:56.359 16:21:32 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:56.359 00:05:56.359 00:05:56.359 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.359 http://cunit.sourceforge.net/ 00:05:56.359 00:05:56.359 00:05:56.359 Suite: raid_sb 00:05:56.359 Test: test_raid_bdev_write_superblock ...passed 00:05:56.359 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:56.359 Test: 
test_raid_bdev_parse_superblock ...[2024-07-11 16:21:32.943051] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:56.359 passed 00:05:56.359 00:05:56.359 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.359 suites 1 1 n/a 0 0 00:05:56.359 tests 3 3 3 0 0 00:05:56.359 asserts 32 32 32 0 n/a 00:05:56.359 00:05:56.359 Elapsed time = 0.001 seconds 00:05:56.359 16:21:32 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:56.359 00:05:56.359 00:05:56.359 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.359 http://cunit.sourceforge.net/ 00:05:56.359 00:05:56.359 00:05:56.359 Suite: concat 00:05:56.359 Test: test_concat_start ...passed 00:05:56.359 Test: test_concat_rw ...passed 00:05:56.359 Test: test_concat_null_payload ...passed 00:05:56.359 00:05:56.359 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.359 suites 1 1 n/a 0 0 00:05:56.359 tests 3 3 3 0 0 00:05:56.359 asserts 8097 8097 8097 0 n/a 00:05:56.359 00:05:56.359 Elapsed time = 0.007 seconds 00:05:56.359 16:21:33 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:56.359 00:05:56.359 00:05:56.359 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.359 http://cunit.sourceforge.net/ 00:05:56.359 00:05:56.359 00:05:56.359 Suite: raid1 00:05:56.359 Test: test_raid1_start ...passed 00:05:56.359 Test: test_raid1_read_balancing ...passed 00:05:56.359 00:05:56.359 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.359 suites 1 1 n/a 0 0 00:05:56.359 tests 2 2 2 0 0 00:05:56.359 asserts 2856 2856 2856 0 n/a 00:05:56.359 00:05:56.359 Elapsed time = 0.003 seconds 00:05:56.359 16:21:33 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:56.359 00:05:56.359 00:05:56.359 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.359 http://cunit.sourceforge.net/ 00:05:56.359 00:05:56.359 00:05:56.359 Suite: zone 00:05:56.359 Test: test_zone_get_operation ...passed 00:05:56.359 Test: test_bdev_zone_get_info ...passed 00:05:56.359 Test: test_bdev_zone_management ...passed 00:05:56.359 Test: test_bdev_zone_append ...passed 00:05:56.359 Test: test_bdev_zone_append_with_md ...passed 00:05:56.359 Test: test_bdev_zone_appendv ...passed 00:05:56.359 Test: test_bdev_zone_appendv_with_md ...passed 00:05:56.359 Test: test_bdev_io_get_append_location ...passed 00:05:56.359 00:05:56.359 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.359 suites 1 1 n/a 0 0 00:05:56.359 tests 8 8 8 0 0 00:05:56.359 asserts 94 94 94 0 n/a 00:05:56.359 00:05:56.359 Elapsed time = 0.001 seconds 00:05:56.359 16:21:33 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:56.359 00:05:56.359 00:05:56.359 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.359 http://cunit.sourceforge.net/ 00:05:56.359 00:05:56.359 00:05:56.359 Suite: gpt_parse 00:05:56.359 Test: test_parse_mbr_and_primary ...[2024-07-11 16:21:33.076008] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:56.359 [2024-07-11 16:21:33.076253] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:56.359 [2024-07-11 16:21:33.076296] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:56.359 [2024-07-11 16:21:33.076354] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:56.359 [2024-07-11 16:21:33.076385] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:56.359 [2024-07-11 16:21:33.076443] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:56.359 passed 00:05:56.359 Test: test_parse_secondary ...[2024-07-11 16:21:33.077080] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:56.359 [2024-07-11 16:21:33.077138] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:56.359 [2024-07-11 16:21:33.077163] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:56.359 [2024-07-11 16:21:33.077185] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:56.359 passed 00:05:56.359 Test: test_check_mbr ...[2024-07-11 16:21:33.077774] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:56.359 passed 00:05:56.359 Test: test_read_header ...[2024-07-11 16:21:33.077809] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:56.359 [2024-07-11 16:21:33.077852] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:56.359 [2024-07-11 16:21:33.077924] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:56.359 [2024-07-11 16:21:33.077989] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:56.359 [2024-07-11 16:21:33.078023] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:56.359 [2024-07-11 16:21:33.078050] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:56.359 [2024-07-11 16:21:33.078072] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:56.359 passed 00:05:56.359 Test: test_read_partitions ...[2024-07-11 16:21:33.078111] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:56.359 [2024-07-11 16:21:33.078147] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:56.359 [2024-07-11 16:21:33.078170] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:56.359 [2024-07-11 16:21:33.078187] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:56.359 [2024-07-11 16:21:33.078574] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: 
GPT partition entry array crc32 did not match 00:05:56.359 passed 00:05:56.359 00:05:56.359 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.359 suites 1 1 n/a 0 0 00:05:56.359 tests 5 5 5 0 0 00:05:56.359 asserts 33 33 33 0 n/a 00:05:56.359 00:05:56.359 Elapsed time = 0.003 seconds 00:05:56.359 16:21:33 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:56.359 00:05:56.359 00:05:56.359 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.359 http://cunit.sourceforge.net/ 00:05:56.359 00:05:56.359 00:05:56.359 Suite: bdev_part 00:05:56.359 Test: part_test ...[2024-07-11 16:21:33.110323] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:56.359 passed 00:05:56.359 Test: part_free_test ...passed 00:05:56.626 Test: part_get_io_channel_test ...passed 00:05:56.626 Test: part_construct_ext ...passed 00:05:56.626 00:05:56.626 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.626 suites 1 1 n/a 0 0 00:05:56.626 tests 4 4 4 0 0 00:05:56.626 asserts 48 48 48 0 n/a 00:05:56.626 00:05:56.626 Elapsed time = 0.057 seconds 00:05:56.626 16:21:33 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:56.626 00:05:56.626 00:05:56.626 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.626 http://cunit.sourceforge.net/ 00:05:56.626 00:05:56.626 00:05:56.626 Suite: scsi_nvme_suite 00:05:56.626 Test: scsi_nvme_translate_test ...passed 00:05:56.626 00:05:56.626 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.626 suites 1 1 n/a 0 0 00:05:56.626 tests 1 1 1 0 0 00:05:56.626 asserts 104 104 104 0 n/a 00:05:56.626 00:05:56.626 Elapsed time = 0.000 seconds 00:05:56.626 16:21:33 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:56.626 00:05:56.626 00:05:56.626 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.626 http://cunit.sourceforge.net/ 00:05:56.626 00:05:56.626 00:05:56.626 Suite: lvol 00:05:56.626 Test: ut_lvs_init ...[2024-07-11 16:21:33.248501] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:56.626 [2024-07-11 16:21:33.249707] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:56.626 passed 00:05:56.626 Test: ut_lvol_init ...passed 00:05:56.626 Test: ut_lvol_snapshot ...passed 00:05:56.626 Test: ut_lvol_clone ...passed 00:05:56.626 Test: ut_lvs_destroy ...passed 00:05:56.626 Test: ut_lvs_unload ...passed 00:05:56.626 Test: ut_lvol_resize ...[2024-07-11 16:21:33.254792] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:56.626 passed 00:05:56.626 Test: ut_lvol_set_read_only ...passed 00:05:56.626 Test: ut_lvol_hotremove ...passed 00:05:56.626 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:56.626 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:56.626 Test: ut_lvol_read_write ...passed 00:05:56.626 Test: ut_vbdev_lvol_submit_request ...passed 00:05:56.626 Test: ut_lvol_examine_config ...passed 00:05:56.626 Test: ut_lvol_examine_disk ...[2024-07-11 16:21:33.258832] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:56.626 passed 00:05:56.626 Test: ut_lvol_rename ...[2024-07-11 16:21:33.260566] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:56.626 [2024-07-11 16:21:33.260923] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:56.626 passed 00:05:56.626 Test: ut_bdev_finish ...passed 00:05:56.626 Test: ut_lvs_rename ...passed 00:05:56.626 Test: ut_lvol_seek ...passed 00:05:56.626 Test: ut_esnap_dev_create ...[2024-07-11 16:21:33.263393] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:56.626 [2024-07-11 16:21:33.263679] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:56.626 [2024-07-11 16:21:33.263921] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:56.626 [2024-07-11 16:21:33.264185] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:56.626 passed 00:05:56.626 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-11 16:21:33.264990] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:56.626 [2024-07-11 16:21:33.265245] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:56.626 passed 00:05:56.626 00:05:56.626 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.626 suites 1 1 n/a 0 0 00:05:56.626 tests 21 21 21 0 0 00:05:56.626 asserts 712 712 712 0 n/a 00:05:56.626 00:05:56.626 Elapsed time = 0.009 seconds 00:05:56.626 16:21:33 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:56.626 00:05:56.626 00:05:56.626 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.626 http://cunit.sourceforge.net/ 00:05:56.626 00:05:56.626 00:05:56.626 Suite: zone_block 00:05:56.626 Test: test_zone_block_create ...passed 00:05:56.626 Test: test_zone_block_create_invalid ...[2024-07-11 16:21:33.314967] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:56.626 [2024-07-11 16:21:33.315381] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-11 16:21:33.315612] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:56.626 [2024-07-11 16:21:33.315762] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-11 16:21:33.315997] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:56.626 [2024-07-11 16:21:33.316119] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-11 16:21:33.316273] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:56.626 [2024-07-11 16:21:33.316411] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:05:56.626 Test: test_get_zone_info ...[2024-07-11 16:21:33.317133] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.317299] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.317432] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 passed 00:05:56.626 Test: test_supported_io_types ...passed 00:05:56.626 Test: test_reset_zone ...[2024-07-11 16:21:33.318533] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.318676] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 passed 00:05:56.626 Test: test_open_zone ...[2024-07-11 16:21:33.319271] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.319944] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.320091] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 passed 00:05:56.626 Test: test_zone_write ...[2024-07-11 16:21:33.320697] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:56.626 [2024-07-11 16:21:33.320828] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.321116] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:56.626 [2024-07-11 16:21:33.321257] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.325892] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:56.626 [2024-07-11 16:21:33.326035] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
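The test_zone_write errors above encode the zoned-write rules: the target zone must exist and be in a writable state, the write must start exactly at the zone's write pointer, and it must not run past the zone's capacity. A toy model of those checks (struct and names invented; only the rules come from the log):

```c
#include <stdbool.h>
#include <stdint.h>

struct zone {
	uint64_t start_lba;	/* first lba of the zone */
	uint64_t capacity;	/* writable blocks in the zone */
	uint64_t write_ptr;	/* next writable lba */
	int	 state;		/* the log's "invalid state 2" blocks writes */
};

static bool
zone_write_ok(const struct zone *z, uint64_t lba, uint64_t len)
{
	if (z == NULL) {
		return false;	/* "Trying to write to invalid zone (lba 0x5000)" */
	}
	if (z->state == 2) {
		return false;	/* "Trying to write to zone in invalid state 2" */
	}
	if (lba != z->write_ptr) {
		return false;	/* "invalid address (lba 0x407, wp 0x405)" */
	}
	if (lba + len > z->start_lba + z->capacity) {
		return false;	/* "Write exceeds zone capacity" */
	}
	return true;
}
```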
00:05:56.626 [2024-07-11 16:21:33.326180] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:56.626 [2024-07-11 16:21:33.326303] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.330976] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:56.626 [2024-07-11 16:21:33.331139] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 passed 00:05:56.626 Test: test_zone_read ...[2024-07-11 16:21:33.331754] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:56.626 [2024-07-11 16:21:33.331879] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.332005] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:56.626 [2024-07-11 16:21:33.332112] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.332584] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:56.626 [2024-07-11 16:21:33.332706] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 passed 00:05:56.626 Test: test_close_zone ...passed 00:05:56.626 Test: test_finish_zone ...passed[2024-07-11 16:21:33.333442] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.333619] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.333950] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.334041] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.335094] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.626 [2024-07-11 16:21:33.335197] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
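The test_zone_read messages just above add the read-side rules; a companion sketch under the same caveats (illustrative parameters, rules taken from the error text):

```c
#include <stdbool.h>
#include <stdint.h>

static bool
zone_read_ok(uint64_t zone_size, uint64_t zone_capacity, uint64_t num_zones,
	     uint64_t lba, uint64_t len)
{
	uint64_t zone_idx = lba / zone_size;
	uint64_t offset = lba % zone_size;

	if (zone_idx >= num_zones) {
		return false;	/* "Trying to read from invalid zone (lba 0x5000)" */
	}
	if (offset + len > zone_capacity) {
		return false;	/* "Read exceeds zone capacity (lba 0x3f8, len 0x10)" */
	}
	return true;
}
```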
00:05:56.626 00:05:56.626 Test: test_append_zone ...[2024-07-11 16:21:33.335707] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:56.627 [2024-07-11 16:21:33.335834] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.627 [2024-07-11 16:21:33.335960] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:56.627 [2024-07-11 16:21:33.336060] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.627 [2024-07-11 16:21:33.345578] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:56.627 [2024-07-11 16:21:33.345750] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:56.627 passed 00:05:56.627 00:05:56.627 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.627 suites 1 1 n/a 0 0 00:05:56.627 tests 11 11 11 0 0 00:05:56.627 asserts 3437 3437 3437 0 n/a 00:05:56.627 00:05:56.627 Elapsed time = 0.028 seconds 00:05:56.627 16:21:33 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:56.627 00:05:56.627 00:05:56.627 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.627 http://cunit.sourceforge.net/ 00:05:56.627 00:05:56.627 00:05:56.627 Suite: bdev 00:05:56.885 Test: basic ...[2024-07-11 16:21:33.447412] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x557494ee4401): Operation not permitted (rc=-1) 00:05:56.885 [2024-07-11 16:21:33.448012] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x557494ee43c0): Operation not permitted (rc=-1) 00:05:56.885 [2024-07-11 16:21:33.448190] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x557494ee4401): Operation not permitted (rc=-1) 00:05:56.885 passed 00:05:56.885 Test: unregister_and_close ...passed 00:05:56.885 Test: unregister_and_close_different_threads ...passed 00:05:56.885 Test: basic_qos ...passed 00:05:57.143 Test: put_channel_during_reset ...passed 00:05:57.143 Test: aborted_reset ...passed 00:05:57.143 Test: aborted_reset_no_outstanding_io ...passed 00:05:57.143 Test: io_during_reset ...passed 00:05:57.143 Test: reset_completions ...passed 00:05:57.143 Test: io_during_qos_queue ...passed 00:05:57.401 Test: io_during_qos_reset ...passed 00:05:57.401 Test: enomem ...passed 00:05:57.401 Test: enomem_multi_bdev ...passed 00:05:57.401 Test: enomem_multi_bdev_unregister ...passed 00:05:57.401 Test: enomem_multi_io_target ...passed 00:05:57.401 Test: qos_dynamic_enable ...passed 00:05:57.659 Test: bdev_histograms_mt ...passed 00:05:57.659 Test: bdev_set_io_timeout_mt ...[2024-07-11 16:21:34.290295] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:05:57.659 passed 00:05:57.659 Test: lock_lba_range_then_submit_io ...[2024-07-11 16:21:34.310760] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x557494ee4380 already registered (old:0x6130000003c0 new:0x613000000c80) 00:05:57.659 passed 
00:05:57.659 Test: unregister_during_reset ...passed 00:05:57.659 Test: event_notify_and_close ...passed 00:05:57.659 Test: unregister_and_qos_poller ...passed 00:05:57.659 Suite: bdev_wrong_thread 00:05:57.659 Test: spdk_bdev_register_wt ...[2024-07-11 16:21:34.462494] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:05:57.659 passed 00:05:57.659 Test: spdk_bdev_examine_wt ...[2024-07-11 16:21:34.463275] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:05:57.659 passed 00:05:57.659 00:05:57.659 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.659 suites 2 2 n/a 0 0 00:05:57.659 tests 24 24 24 0 0 00:05:57.659 asserts 621 621 621 0 n/a 00:05:57.659 00:05:57.659 Elapsed time = 1.036 seconds 00:05:57.917 ************************************ 00:05:57.917 END TEST unittest_bdev 00:05:57.917 ************************************ 00:05:57.917 00:05:57.917 real 0m4.353s 00:05:57.917 user 0m1.946s 00:05:57.917 sys 0m2.376s 00:05:57.917 16:21:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.917 16:21:34 -- common/autotest_common.sh@10 -- # set +x 00:05:57.917 16:21:34 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:57.917 16:21:34 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:57.917 16:21:34 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:57.917 16:21:34 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:57.917 16:21:34 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:57.918 16:21:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.918 16:21:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.918 16:21:34 -- common/autotest_common.sh@10 -- # set +x 00:05:57.918 ************************************ 00:05:57.918 START TEST unittest_bdev_raid5f 00:05:57.918 ************************************ 00:05:57.918 16:21:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:57.918 00:05:57.918 00:05:57.918 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.918 http://cunit.sourceforge.net/ 00:05:57.918 00:05:57.918 00:05:57.918 Suite: raid5f 00:05:57.918 Test: test_raid5f_start ...passed 00:05:58.485 Test: test_raid5f_submit_read_request ...passed 00:05:58.485 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:01.836 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:06:19.918 Test: test_raid5f_chunk_write_error ...passed 00:06:26.476 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:06:29.015 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:01.119 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:01.119 00:07:01.119 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.119 suites 1 1 n/a 0 0 00:07:01.119 tests 8 8 8 0 0 00:07:01.119 asserts 351864 351864 351864 0 n/a 00:07:01.119 00:07:01.119 Elapsed time = 59.132 seconds 00:07:01.119 ************************************ 00:07:01.119 END 
TEST unittest_bdev_raid5f 00:07:01.119 ************************************ 00:07:01.119 00:07:01.119 real 0m59.229s 00:07:01.119 user 0m56.318s 00:07:01.119 sys 0m2.880s 00:07:01.119 16:22:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.119 16:22:33 -- common/autotest_common.sh@10 -- # set +x 00:07:01.119 16:22:33 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:07:01.119 16:22:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:01.119 16:22:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.119 16:22:33 -- common/autotest_common.sh@10 -- # set +x 00:07:01.119 ************************************ 00:07:01.119 START TEST unittest_blob_blobfs 00:07:01.119 ************************************ 00:07:01.119 16:22:33 -- common/autotest_common.sh@1104 -- # unittest_blob 00:07:01.119 16:22:33 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:01.119 16:22:33 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:01.119 00:07:01.119 00:07:01.119 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.119 http://cunit.sourceforge.net/ 00:07:01.119 00:07:01.119 00:07:01.119 Suite: blob_nocopy_noextent 00:07:01.119 Test: blob_init ...[2024-07-11 16:22:33.872637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:01.119 passed 00:07:01.120 Test: blob_thin_provision ...passed 00:07:01.120 Test: blob_read_only ...passed 00:07:01.120 Test: bs_load ...[2024-07-11 16:22:33.980010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:01.120 passed 00:07:01.120 Test: bs_load_custom_cluster_size ...passed 00:07:01.120 Test: bs_load_after_failed_grow ...passed 00:07:01.120 Test: bs_cluster_sz ...[2024-07-11 16:22:34.018182] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:01.120 [2024-07-11 16:22:34.018770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
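The raid5f suite above spends nearly a minute (59.132 seconds, 351864 asserts) driving full-stripe writes and degraded reads; the arithmetic under test is XOR parity. One stripe's parity computation, as an illustrative sketch rather than the SPDK implementation:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Parity block = XOR of all data blocks in the stripe. Reconstructing a
 * lost data block (the degraded-read tests) is the same loop: XOR the
 * parity with the surviving data blocks. */
static void
compute_parity(uint8_t *parity, uint8_t *const *data, int n_data, size_t len)
{
	memset(parity, 0, len);
	for (int d = 0; d < n_data; d++) {
		for (size_t i = 0; i < len; i++) {
			parity[i] ^= data[d][i];
		}
	}
}
```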
00:07:01.120 [2024-07-11 16:22:34.019117] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:01.120 passed 00:07:01.120 Test: bs_resize_md ...passed 00:07:01.120 Test: bs_destroy ...passed 00:07:01.120 Test: bs_type ...passed 00:07:01.120 Test: bs_super_block ...passed 00:07:01.120 Test: bs_test_recover_cluster_count ...passed 00:07:01.120 Test: bs_grow_live ...passed 00:07:01.120 Test: bs_grow_live_no_space ...passed 00:07:01.120 Test: bs_test_grow ...passed 00:07:01.120 Test: blob_serialize_test ...passed 00:07:01.120 Test: super_block_crc ...passed 00:07:01.120 Test: blob_thin_prov_write_count_io ...passed 00:07:01.120 Test: bs_load_iter_test ...passed 00:07:01.120 Test: blob_relations ...[2024-07-11 16:22:34.227133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.120 [2024-07-11 16:22:34.227497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 [2024-07-11 16:22:34.228579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.120 [2024-07-11 16:22:34.228835] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 passed 00:07:01.120 Test: blob_relations2 ...[2024-07-11 16:22:34.247115] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.120 [2024-07-11 16:22:34.247470] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 [2024-07-11 16:22:34.247552] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.120 [2024-07-11 16:22:34.247837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 [2024-07-11 16:22:34.249669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.120 [2024-07-11 16:22:34.249897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 [2024-07-11 16:22:34.250408] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.120 [2024-07-11 16:22:34.250593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 passed 00:07:01.120 Test: blob_relations3 ...passed 00:07:01.120 Test: blobstore_clean_power_failure ...passed 00:07:01.120 Test: blob_delete_snapshot_power_failure ...[2024-07-11 16:22:34.453221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:01.120 [2024-07-11 16:22:34.469521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:01.120 [2024-07-11 16:22:34.469902] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:01.120 [2024-07-11 16:22:34.470014] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 [2024-07-11 16:22:34.485966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:01.120 [2024-07-11 16:22:34.486306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:01.120 [2024-07-11 16:22:34.486412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:01.120 [2024-07-11 16:22:34.486674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 [2024-07-11 16:22:34.502658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:01.120 [2024-07-11 16:22:34.503033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 [2024-07-11 16:22:34.519134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:01.120 [2024-07-11 16:22:34.519446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 [2024-07-11 16:22:34.535535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:01.120 [2024-07-11 16:22:34.535875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 passed 00:07:01.120 Test: blob_create_snapshot_power_failure ...[2024-07-11 16:22:34.583065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:01.120 [2024-07-11 16:22:34.614324] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:01.120 [2024-07-11 16:22:34.630597] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:01.120 passed 00:07:01.120 Test: blob_io_unit ...passed 00:07:01.120 Test: blob_io_unit_compatibility ...passed 00:07:01.120 Test: blob_ext_md_pages ...passed 00:07:01.120 Test: blob_esnap_io_4096_4096 ...passed 00:07:01.120 Test: blob_esnap_io_512_512 ...passed 00:07:01.120 Test: blob_esnap_io_4096_512 ...passed 00:07:01.120 Test: blob_esnap_io_512_4096 ...passed 00:07:01.120 Suite: blob_bs_nocopy_noextent 00:07:01.120 Test: blob_open ...passed 00:07:01.120 Test: blob_create ...[2024-07-11 16:22:34.937718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:01.120 passed 00:07:01.120 Test: blob_create_loop ...passed 00:07:01.120 Test: blob_create_fail ...[2024-07-11 16:22:35.055140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:01.120 passed 00:07:01.120 Test: blob_create_internal ...passed 00:07:01.120 Test: blob_create_zero_extent ...passed 00:07:01.120 Test: blob_snapshot ...passed 00:07:01.120 Test: blob_clone ...passed 00:07:01.120 Test: blob_inflate ...[2024-07-11 16:22:35.282531] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:01.120 passed 00:07:01.120 Test: blob_delete ...passed 00:07:01.120 Test: blob_resize_test ...[2024-07-11 16:22:35.362586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:01.120 passed 00:07:01.120 Test: channel_ops ...passed 00:07:01.120 Test: blob_super ...passed 00:07:01.120 Test: blob_rw_verify_iov ...passed 00:07:01.120 Test: blob_unmap ...passed 00:07:01.120 Test: blob_iter ...passed 00:07:01.120 Test: blob_parse_md ...passed 00:07:01.120 Test: bs_load_pending_removal ...passed 00:07:01.120 Test: bs_unload ...[2024-07-11 16:22:35.686627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:01.120 passed 00:07:01.120 Test: bs_usable_clusters ...passed 00:07:01.120 Test: blob_crc ...[2024-07-11 16:22:35.767155] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:01.120 [2024-07-11 16:22:35.767507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:01.120 passed 00:07:01.120 Test: blob_flags ...passed 00:07:01.120 Test: bs_version ...passed 00:07:01.120 Test: blob_set_xattrs_test ...[2024-07-11 16:22:35.894822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:01.120 [2024-07-11 16:22:35.895150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:01.120 passed 00:07:01.120 Test: blob_thin_prov_alloc ...passed 00:07:01.120 Test: blob_insert_cluster_msg_test ...passed 00:07:01.120 Test: blob_thin_prov_rw ...passed 00:07:01.120 Test: blob_thin_prov_rle ...passed 00:07:01.120 Test: blob_thin_prov_rw_iov ...passed 00:07:01.120 Test: blob_snapshot_rw ...passed 00:07:01.120 Test: blob_snapshot_rw_iov ...passed 00:07:01.120 Test: blob_inflate_rw ...passed 00:07:01.120 Test: blob_snapshot_freeze_io ...passed 00:07:01.120 Test: blob_operation_split_rw ...passed 00:07:01.120 Test: blob_operation_split_rw_iov ...passed 00:07:01.120 Test: blob_simultaneous_operations ...[2024-07-11 16:22:36.953605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:01.120 [2024-07-11 16:22:36.953921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 [2024-07-11 16:22:36.955312] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:01.120 [2024-07-11 16:22:36.955484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 [2024-07-11 16:22:36.967591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:01.120 [2024-07-11 16:22:36.967833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 [2024-07-11 16:22:36.968022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:01.120 [2024-07-11 16:22:36.968170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.120 passed 00:07:01.120 Test: blob_persist_test ...passed 00:07:01.120 Test: blob_decouple_snapshot ...passed 00:07:01.120 Test: blob_seek_io_unit ...passed 00:07:01.120 Test: blob_nested_freezes ...passed 00:07:01.120 Suite: blob_blob_nocopy_noextent 00:07:01.120 Test: blob_write ...passed 00:07:01.120 Test: blob_read ...passed 00:07:01.120 Test: blob_rw_verify ...passed 00:07:01.120 Test: blob_rw_verify_iov_nomem ...passed 00:07:01.120 Test: blob_rw_iov_read_only ...passed 00:07:01.120 Test: blob_xattr ...passed 00:07:01.120 Test: blob_dirty_shutdown ...passed 00:07:01.120 Test: blob_is_degraded ...passed 00:07:01.120 Suite: blob_esnap_bs_nocopy_noextent 00:07:01.120 Test: blob_esnap_create ...passed 00:07:01.120 Test: blob_esnap_thread_add_remove ...passed 00:07:01.120 Test: blob_esnap_clone_snapshot ...passed 00:07:01.120 Test: blob_esnap_clone_inflate ...passed 00:07:01.120 Test: blob_esnap_clone_decouple ...passed 00:07:01.120 Test: blob_esnap_clone_reload ...passed 00:07:01.120 Test: blob_esnap_hotplug ...passed 00:07:01.120 Suite: blob_nocopy_extent 00:07:01.120 Test: blob_init ...[2024-07-11 16:22:37.828612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:01.120 passed 00:07:01.120 Test: blob_thin_provision ...passed 00:07:01.120 Test: blob_read_only ...passed 00:07:01.121 Test: bs_load ...[2024-07-11 16:22:37.888998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:01.121 passed 00:07:01.121 Test: bs_load_custom_cluster_size ...passed 00:07:01.121 Test: bs_load_after_failed_grow ...passed 00:07:01.121 Test: bs_cluster_sz ...[2024-07-11 16:22:37.923075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:01.121 [2024-07-11 16:22:37.923499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
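The bs_cluster_sz failures logged here are option-validation probes: spdk_bs_init() rejects a cluster size of 0 and any cluster size smaller than the 4096-byte metadata page. A minimal sketch of how such a probe is typically driven, assuming only the public spdk/blob.h API (note that spdk_bs_opts_init() takes just the opts pointer on older SPDK releases):

    #include "spdk/blob.h"

    static void
    init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
    {
        (void)bs;
        /* the negative cases above expect a nonzero bserrno here */
        *(int *)cb_arg = bserrno;
    }

    static void
    try_cluster_sz(struct spdk_bs_dev *dev, uint32_t cluster_sz, int *rc)
    {
        struct spdk_bs_opts opts;

        spdk_bs_opts_init(&opts, sizeof(opts)); /* older releases: spdk_bs_opts_init(&opts) */
        opts.cluster_sz = cluster_sz;           /* 4095 trips "smaller than page size 4096" */
        spdk_bs_init(dev, &opts, init_done, rc);
    }

With cluster_sz = 4095 the callback observes a negative bserrno; page-aligned sizes such as the default succeed.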
00:07:01.121 [2024-07-11 16:22:37.923682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:01.379 passed 00:07:01.379 Test: bs_resize_md ...passed 00:07:01.379 Test: bs_destroy ...passed 00:07:01.379 Test: bs_type ...passed 00:07:01.379 Test: bs_super_block ...passed 00:07:01.379 Test: bs_test_recover_cluster_count ...passed 00:07:01.379 Test: bs_grow_live ...passed 00:07:01.379 Test: bs_grow_live_no_space ...passed 00:07:01.379 Test: bs_test_grow ...passed 00:07:01.379 Test: blob_serialize_test ...passed 00:07:01.379 Test: super_block_crc ...passed 00:07:01.379 Test: blob_thin_prov_write_count_io ...passed 00:07:01.379 Test: bs_load_iter_test ...passed 00:07:01.379 Test: blob_relations ...[2024-07-11 16:22:38.116914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.379 [2024-07-11 16:22:38.117301] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.379 [2024-07-11 16:22:38.118477] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.380 [2024-07-11 16:22:38.118751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.380 passed 00:07:01.380 Test: blob_relations2 ...[2024-07-11 16:22:38.136160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.380 [2024-07-11 16:22:38.136489] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.380 [2024-07-11 16:22:38.136562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.380 [2024-07-11 16:22:38.136694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.380 [2024-07-11 16:22:38.138301] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.380 [2024-07-11 16:22:38.138504] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.380 [2024-07-11 16:22:38.139070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:01.380 [2024-07-11 16:22:38.139279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.380 passed 00:07:01.380 Test: blob_relations3 ...passed 00:07:01.638 Test: blobstore_clean_power_failure ...passed 00:07:01.638 Test: blob_delete_snapshot_power_failure ...[2024-07-11 16:22:38.328493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:01.638 [2024-07-11 16:22:38.343372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:01.639 [2024-07-11 16:22:38.358202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:01.639 [2024-07-11 16:22:38.358431] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:01.639 [2024-07-11 16:22:38.358502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.639 [2024-07-11 16:22:38.373098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:01.639 [2024-07-11 16:22:38.373296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:01.639 [2024-07-11 16:22:38.373461] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:01.639 [2024-07-11 16:22:38.373645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.639 [2024-07-11 16:22:38.388152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:01.639 [2024-07-11 16:22:38.388378] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:01.639 [2024-07-11 16:22:38.388455] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:01.639 [2024-07-11 16:22:38.388597] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.639 [2024-07-11 16:22:38.403957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:01.639 [2024-07-11 16:22:38.404229] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.639 [2024-07-11 16:22:38.419070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:01.639 [2024-07-11 16:22:38.419393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.639 [2024-07-11 16:22:38.434465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:01.639 [2024-07-11 16:22:38.434845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:01.898 passed 00:07:01.898 Test: blob_create_snapshot_power_failure ...[2024-07-11 16:22:38.480074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:01.898 [2024-07-11 16:22:38.494943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:01.898 [2024-07-11 16:22:38.523409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:01.898 [2024-07-11 16:22:38.538721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:01.898 passed 00:07:01.898 Test: blob_io_unit ...passed 00:07:01.898 Test: blob_io_unit_compatibility ...passed 00:07:01.898 Test: blob_ext_md_pages ...passed 00:07:01.898 Test: blob_esnap_io_4096_4096 ...passed 00:07:01.898 Test: blob_esnap_io_512_512 ...passed 00:07:02.156 Test: blob_esnap_io_4096_512 ...passed 00:07:02.157 Test: 
blob_esnap_io_512_4096 ...passed 00:07:02.157 Suite: blob_bs_nocopy_extent 00:07:02.157 Test: blob_open ...passed 00:07:02.157 Test: blob_create ...[2024-07-11 16:22:38.799641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:02.157 passed 00:07:02.157 Test: blob_create_loop ...passed 00:07:02.157 Test: blob_create_fail ...[2024-07-11 16:22:38.917988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:02.157 passed 00:07:02.415 Test: blob_create_internal ...passed 00:07:02.415 Test: blob_create_zero_extent ...passed 00:07:02.415 Test: blob_snapshot ...passed 00:07:02.415 Test: blob_clone ...passed 00:07:02.415 Test: blob_inflate ...[2024-07-11 16:22:39.128323] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:02.415 passed 00:07:02.415 Test: blob_delete ...passed 00:07:02.415 Test: blob_resize_test ...[2024-07-11 16:22:39.205777] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:02.415 passed 00:07:02.673 Test: channel_ops ...passed 00:07:02.673 Test: blob_super ...passed 00:07:02.673 Test: blob_rw_verify_iov ...passed 00:07:02.673 Test: blob_unmap ...passed 00:07:02.673 Test: blob_iter ...passed 00:07:02.673 Test: blob_parse_md ...passed 00:07:02.931 Test: bs_load_pending_removal ...passed 00:07:02.931 Test: bs_unload ...[2024-07-11 16:22:39.506203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:02.931 passed 00:07:02.931 Test: bs_usable_clusters ...passed 00:07:02.931 Test: blob_crc ...[2024-07-11 16:22:39.590217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:02.931 [2024-07-11 16:22:39.590630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:02.931 passed 00:07:02.931 Test: blob_flags ...passed 00:07:02.931 Test: bs_version ...passed 00:07:02.931 Test: blob_set_xattrs_test ...[2024-07-11 16:22:39.703806] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:02.931 [2024-07-11 16:22:39.704140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:02.931 passed 00:07:03.187 Test: blob_thin_prov_alloc ...passed 00:07:03.187 Test: blob_insert_cluster_msg_test ...passed 00:07:03.187 Test: blob_thin_prov_rw ...passed 00:07:03.187 Test: blob_thin_prov_rle ...passed 00:07:03.187 Test: blob_thin_prov_rw_iov ...passed 00:07:03.443 Test: blob_snapshot_rw ...passed 00:07:03.443 Test: blob_snapshot_rw_iov ...passed 00:07:03.700 Test: blob_inflate_rw ...passed 00:07:03.700 Test: blob_snapshot_freeze_io ...passed 00:07:03.957 Test: blob_operation_split_rw ...passed 00:07:03.957 Test: blob_operation_split_rw_iov ...passed 00:07:03.957 Test: blob_simultaneous_operations ...[2024-07-11 16:22:40.713151] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:03.957 [2024-07-11 
16:22:40.713431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.957 [2024-07-11 16:22:40.714658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:03.957 [2024-07-11 16:22:40.714819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.957 [2024-07-11 16:22:40.726246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:03.957 [2024-07-11 16:22:40.726430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.957 [2024-07-11 16:22:40.726677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:03.957 [2024-07-11 16:22:40.726826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:03.957 passed 00:07:04.214 Test: blob_persist_test ...passed 00:07:04.214 Test: blob_decouple_snapshot ...passed 00:07:04.214 Test: blob_seek_io_unit ...passed 00:07:04.214 Test: blob_nested_freezes ...passed 00:07:04.214 Suite: blob_blob_nocopy_extent 00:07:04.214 Test: blob_write ...passed 00:07:04.214 Test: blob_read ...passed 00:07:04.473 Test: blob_rw_verify ...passed 00:07:04.473 Test: blob_rw_verify_iov_nomem ...passed 00:07:04.473 Test: blob_rw_iov_read_only ...passed 00:07:04.473 Test: blob_xattr ...passed 00:07:04.473 Test: blob_dirty_shutdown ...passed 00:07:04.473 Test: blob_is_degraded ...passed 00:07:04.473 Suite: blob_esnap_bs_nocopy_extent 00:07:04.473 Test: blob_esnap_create ...passed 00:07:04.731 Test: blob_esnap_thread_add_remove ...passed 00:07:04.731 Test: blob_esnap_clone_snapshot ...passed 00:07:04.731 Test: blob_esnap_clone_inflate ...passed 00:07:04.731 Test: blob_esnap_clone_decouple ...passed 00:07:04.731 Test: blob_esnap_clone_reload ...passed 00:07:04.731 Test: blob_esnap_hotplug ...passed 00:07:04.731 Suite: blob_copy_noextent 00:07:04.731 Test: blob_init ...[2024-07-11 16:22:41.498481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:04.731 passed 00:07:04.731 Test: blob_thin_provision ...passed 00:07:04.731 Test: blob_read_only ...passed 00:07:04.990 Test: bs_load ...[2024-07-11 16:22:41.550355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:04.990 passed 00:07:04.990 Test: bs_load_custom_cluster_size ...passed 00:07:04.990 Test: bs_load_after_failed_grow ...passed 00:07:04.990 Test: bs_cluster_sz ...[2024-07-11 16:22:41.579492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:04.990 [2024-07-11 16:22:41.579743] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
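The blob_simultaneous_operations and blob_relations errors above all funnel through the same deletability rule: a snapshot can be deleted only when it is closed and has at most one clone, otherwise bs_delete_blob_finish reports the failure seen in the log (typically -EBUSY). A hedged sketch of the calling side, assuming bs and snapshot_id come from earlier spdk_bs_init()/spdk_bs_create_snapshot() calls:

    #include "spdk/blob.h"

    static void
    delete_done(void *cb_arg, int bserrno)
    {
        /* the rejected cases above are expected to land here with -EBUSY */
        *(int *)cb_arg = bserrno;
    }

    static void
    try_delete_snapshot(struct spdk_blob_store *bs, spdk_blob_id snapshot_id, int *rc)
    {
        spdk_bs_delete_blob(bs, snapshot_id, delete_done, rc);
    }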
00:07:04.990 [2024-07-11 16:22:41.579893] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:04.990 passed 00:07:04.990 Test: bs_resize_md ...passed 00:07:04.990 Test: bs_destroy ...passed 00:07:04.990 Test: bs_type ...passed 00:07:04.990 Test: bs_super_block ...passed 00:07:04.990 Test: bs_test_recover_cluster_count ...passed 00:07:04.990 Test: bs_grow_live ...passed 00:07:04.990 Test: bs_grow_live_no_space ...passed 00:07:04.990 Test: bs_test_grow ...passed 00:07:04.990 Test: blob_serialize_test ...passed 00:07:04.990 Test: super_block_crc ...passed 00:07:04.990 Test: blob_thin_prov_write_count_io ...passed 00:07:04.990 Test: bs_load_iter_test ...passed 00:07:04.990 Test: blob_relations ...[2024-07-11 16:22:41.745089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:04.990 [2024-07-11 16:22:41.745449] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.990 [2024-07-11 16:22:41.746054] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:04.990 [2024-07-11 16:22:41.746234] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.990 passed 00:07:04.990 Test: blob_relations2 ...[2024-07-11 16:22:41.761254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:04.990 [2024-07-11 16:22:41.761544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.990 [2024-07-11 16:22:41.761622] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:04.990 [2024-07-11 16:22:41.761719] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.990 [2024-07-11 16:22:41.762669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:04.990 [2024-07-11 16:22:41.762863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.990 [2024-07-11 16:22:41.763306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:04.990 [2024-07-11 16:22:41.763492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:04.990 passed 00:07:04.990 Test: blob_relations3 ...passed 00:07:05.249 Test: blobstore_clean_power_failure ...passed 00:07:05.249 Test: blob_delete_snapshot_power_failure ...[2024-07-11 16:22:41.940475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:05.249 [2024-07-11 16:22:41.954757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:05.249 [2024-07-11 16:22:41.955041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:05.249 [2024-07-11 16:22:41.955108] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.249 [2024-07-11 16:22:41.969876] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:05.249 [2024-07-11 16:22:41.970130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:05.249 [2024-07-11 16:22:41.970199] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:05.249 [2024-07-11 16:22:41.970315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.249 [2024-07-11 16:22:41.985017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:05.249 [2024-07-11 16:22:41.985367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.249 [2024-07-11 16:22:42.000094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:05.249 [2024-07-11 16:22:42.000393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.249 [2024-07-11 16:22:42.014846] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:05.249 [2024-07-11 16:22:42.015136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:05.249 passed 00:07:05.507 Test: blob_create_snapshot_power_failure ...[2024-07-11 16:22:42.056460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:05.507 [2024-07-11 16:22:42.084314] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:05.507 [2024-07-11 16:22:42.098617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:05.507 passed 00:07:05.507 Test: blob_io_unit ...passed 00:07:05.507 Test: blob_io_unit_compatibility ...passed 00:07:05.507 Test: blob_ext_md_pages ...passed 00:07:05.507 Test: blob_esnap_io_4096_4096 ...passed 00:07:05.507 Test: blob_esnap_io_512_512 ...passed 00:07:05.507 Test: blob_esnap_io_4096_512 ...passed 00:07:05.507 Test: blob_esnap_io_512_4096 ...passed 00:07:05.507 Suite: blob_bs_copy_noextent 00:07:05.767 Test: blob_open ...passed 00:07:05.767 Test: blob_create ...[2024-07-11 16:22:42.370036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:05.767 passed 00:07:05.767 Test: blob_create_loop ...passed 00:07:05.767 Test: blob_create_fail ...[2024-07-11 16:22:42.472484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:05.767 passed 00:07:05.767 Test: blob_create_internal ...passed 00:07:05.767 Test: blob_create_zero_extent ...passed 00:07:06.027 Test: blob_snapshot ...passed 00:07:06.027 Test: blob_clone ...passed 00:07:06.027 Test: blob_inflate ...[2024-07-11 16:22:42.668249] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:06.027 passed 00:07:06.027 Test: blob_delete ...passed 00:07:06.027 Test: blob_resize_test ...[2024-07-11 16:22:42.733639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:06.027 passed 00:07:06.027 Test: channel_ops ...passed 00:07:06.027 Test: blob_super ...passed 00:07:06.285 Test: blob_rw_verify_iov ...passed 00:07:06.285 Test: blob_unmap ...passed 00:07:06.285 Test: blob_iter ...passed 00:07:06.285 Test: blob_parse_md ...passed 00:07:06.285 Test: bs_load_pending_removal ...passed 00:07:06.285 Test: bs_unload ...[2024-07-11 16:22:42.998346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:06.285 passed 00:07:06.285 Test: bs_usable_clusters ...passed 00:07:06.285 Test: blob_crc ...[2024-07-11 16:22:43.063961] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:06.285 [2024-07-11 16:22:43.064347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:06.285 passed 00:07:06.542 Test: blob_flags ...passed 00:07:06.542 Test: bs_version ...passed 00:07:06.542 Test: blob_set_xattrs_test ...[2024-07-11 16:22:43.170902] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:06.542 [2024-07-11 16:22:43.171279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:06.542 passed 00:07:06.542 Test: blob_thin_prov_alloc ...passed 00:07:06.798 Test: blob_insert_cluster_msg_test ...passed 00:07:06.798 Test: blob_thin_prov_rw ...passed 00:07:06.798 Test: blob_thin_prov_rle ...passed 00:07:06.798 Test: blob_thin_prov_rw_iov ...passed 00:07:06.798 Test: blob_snapshot_rw ...passed 00:07:06.798 Test: blob_snapshot_rw_iov ...passed 00:07:07.055 Test: blob_inflate_rw ...passed 00:07:07.330 Test: blob_snapshot_freeze_io ...passed 00:07:07.330 Test: blob_operation_split_rw ...passed 00:07:07.330 Test: blob_operation_split_rw_iov ...passed 00:07:07.588 Test: blob_simultaneous_operations ...[2024-07-11 16:22:44.163525] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:07.588 [2024-07-11 16:22:44.163891] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:07.588 [2024-07-11 16:22:44.164416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:07.588 [2024-07-11 16:22:44.164587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:07.588 [2024-07-11 16:22:44.167261] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:07.588 [2024-07-11 16:22:44.167457] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:07.588 [2024-07-11 16:22:44.167584] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:07:07.588 [2024-07-11 16:22:44.167806] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:07.588 passed 00:07:07.588 Test: blob_persist_test ...passed 00:07:07.588 Test: blob_decouple_snapshot ...passed 00:07:07.588 Test: blob_seek_io_unit ...passed 00:07:07.588 Test: blob_nested_freezes ...passed 00:07:07.588 Suite: blob_blob_copy_noextent 00:07:07.588 Test: blob_write ...passed 00:07:07.588 Test: blob_read ...passed 00:07:07.846 Test: blob_rw_verify ...passed 00:07:07.846 Test: blob_rw_verify_iov_nomem ...passed 00:07:07.846 Test: blob_rw_iov_read_only ...passed 00:07:07.846 Test: blob_xattr ...passed 00:07:07.846 Test: blob_dirty_shutdown ...passed 00:07:07.846 Test: blob_is_degraded ...passed 00:07:07.846 Suite: blob_esnap_bs_copy_noextent 00:07:07.846 Test: blob_esnap_create ...passed 00:07:07.846 Test: blob_esnap_thread_add_remove ...passed 00:07:07.846 Test: blob_esnap_clone_snapshot ...passed 00:07:08.104 Test: blob_esnap_clone_inflate ...passed 00:07:08.104 Test: blob_esnap_clone_decouple ...passed 00:07:08.104 Test: blob_esnap_clone_reload ...passed 00:07:08.104 Test: blob_esnap_hotplug ...passed 00:07:08.104 Suite: blob_copy_extent 00:07:08.104 Test: blob_init ...[2024-07-11 16:22:44.787773] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:08.104 passed 00:07:08.104 Test: blob_thin_provision ...passed 00:07:08.104 Test: blob_read_only ...passed 00:07:08.104 Test: bs_load ...[2024-07-11 16:22:44.844736] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:08.104 passed 00:07:08.104 Test: bs_load_custom_cluster_size ...passed 00:07:08.104 Test: bs_load_after_failed_grow ...passed 00:07:08.104 Test: bs_cluster_sz ...[2024-07-11 16:22:44.871738] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:08.104 [2024-07-11 16:22:44.871987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
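The blob_init probe just above ("unsupported dev block length of 500") exercises backing-device validation: the spdk_bs_dev block length must divide the 4096-byte metadata page, so 512 passes and 500 fails. A sketch under that assumption (struct spdk_bs_dev per spdk/blob.h; the I/O callbacks a real device needs are omitted):

    #include "spdk/blob.h"

    /* Passing &bad_dev to spdk_bs_init() is expected to complete with the
     * "unsupported dev block length" error logged above. */
    static struct spdk_bs_dev bad_dev = {
        .blockcnt = 2048,
        .blocklen = 500,  /* 4096 % 500 != 0, so init is rejected */
        /* .read/.write/.create_channel/.destroy omitted in this sketch */
    };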
00:07:08.104 [2024-07-11 16:22:44.872148] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:08.104 passed 00:07:08.104 Test: bs_resize_md ...passed 00:07:08.104 Test: bs_destroy ...passed 00:07:08.363 Test: bs_type ...passed 00:07:08.363 Test: bs_super_block ...passed 00:07:08.363 Test: bs_test_recover_cluster_count ...passed 00:07:08.363 Test: bs_grow_live ...passed 00:07:08.363 Test: bs_grow_live_no_space ...passed 00:07:08.363 Test: bs_test_grow ...passed 00:07:08.363 Test: blob_serialize_test ...passed 00:07:08.363 Test: super_block_crc ...passed 00:07:08.363 Test: blob_thin_prov_write_count_io ...passed 00:07:08.363 Test: bs_load_iter_test ...passed 00:07:08.363 Test: blob_relations ...[2024-07-11 16:22:45.019163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.363 [2024-07-11 16:22:45.019484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.363 [2024-07-11 16:22:45.020467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.363 [2024-07-11 16:22:45.020697] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.363 passed 00:07:08.363 Test: blob_relations2 ...[2024-07-11 16:22:45.034055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.363 [2024-07-11 16:22:45.034325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.363 [2024-07-11 16:22:45.034415] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.363 [2024-07-11 16:22:45.034646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.363 [2024-07-11 16:22:45.035978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.363 [2024-07-11 16:22:45.036177] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.363 [2024-07-11 16:22:45.036687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:08.363 [2024-07-11 16:22:45.036886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.363 passed 00:07:08.363 Test: blob_relations3 ...passed 00:07:08.363 Test: blobstore_clean_power_failure ...passed 00:07:08.621 Test: blob_delete_snapshot_power_failure ...[2024-07-11 16:22:45.179995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:08.621 [2024-07-11 16:22:45.191971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:08.621 [2024-07-11 16:22:45.203989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:08.621 [2024-07-11 16:22:45.204292] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:08.621 [2024-07-11 16:22:45.204390] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.621 [2024-07-11 16:22:45.219449] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:08.621 [2024-07-11 16:22:45.219754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:08.621 [2024-07-11 16:22:45.219815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:08.621 [2024-07-11 16:22:45.219924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.621 [2024-07-11 16:22:45.231893] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:08.621 [2024-07-11 16:22:45.232179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:08.621 [2024-07-11 16:22:45.232241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:08.621 [2024-07-11 16:22:45.232379] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.621 [2024-07-11 16:22:45.244296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:08.621 [2024-07-11 16:22:45.244629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.621 [2024-07-11 16:22:45.256104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:08.621 [2024-07-11 16:22:45.256420] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.621 [2024-07-11 16:22:45.268022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:08.621 [2024-07-11 16:22:45.268311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:08.621 passed 00:07:08.621 Test: blob_create_snapshot_power_failure ...[2024-07-11 16:22:45.302771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:08.621 [2024-07-11 16:22:45.314151] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:08.621 [2024-07-11 16:22:45.336387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:08.621 [2024-07-11 16:22:45.347815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:08.621 passed 00:07:08.621 Test: blob_io_unit ...passed 00:07:08.621 Test: blob_io_unit_compatibility ...passed 00:07:08.621 Test: blob_ext_md_pages ...passed 00:07:08.879 Test: blob_esnap_io_4096_4096 ...passed 00:07:08.879 Test: blob_esnap_io_512_512 ...passed 00:07:08.879 Test: blob_esnap_io_4096_512 ...passed 00:07:08.879 Test: 
blob_esnap_io_512_4096 ...passed 00:07:08.879 Suite: blob_bs_copy_extent 00:07:08.879 Test: blob_open ...passed 00:07:08.879 Test: blob_create ...[2024-07-11 16:22:45.569291] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:08.879 passed 00:07:08.879 Test: blob_create_loop ...passed 00:07:08.879 Test: blob_create_fail ...[2024-07-11 16:22:45.659754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:08.879 passed 00:07:09.137 Test: blob_create_internal ...passed 00:07:09.137 Test: blob_create_zero_extent ...passed 00:07:09.137 Test: blob_snapshot ...passed 00:07:09.137 Test: blob_clone ...passed 00:07:09.137 Test: blob_inflate ...[2024-07-11 16:22:45.836568] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:09.137 passed 00:07:09.137 Test: blob_delete ...passed 00:07:09.137 Test: blob_resize_test ...[2024-07-11 16:22:45.913064] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:09.137 passed 00:07:09.395 Test: channel_ops ...passed 00:07:09.395 Test: blob_super ...passed 00:07:09.395 Test: blob_rw_verify_iov ...passed 00:07:09.395 Test: blob_unmap ...passed 00:07:09.395 Test: blob_iter ...passed 00:07:09.395 Test: blob_parse_md ...passed 00:07:09.395 Test: bs_load_pending_removal ...passed 00:07:09.395 Test: bs_unload ...[2024-07-11 16:22:46.201426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:09.653 passed 00:07:09.653 Test: bs_usable_clusters ...passed 00:07:09.653 Test: blob_crc ...[2024-07-11 16:22:46.274034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:09.653 [2024-07-11 16:22:46.274448] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:09.653 passed 00:07:09.653 Test: blob_flags ...passed 00:07:09.653 Test: bs_version ...passed 00:07:09.653 Test: blob_set_xattrs_test ...[2024-07-11 16:22:46.383920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:09.653 [2024-07-11 16:22:46.384308] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:09.653 passed 00:07:09.911 Test: blob_thin_prov_alloc ...passed 00:07:09.911 Test: blob_insert_cluster_msg_test ...passed 00:07:09.911 Test: blob_thin_prov_rw ...passed 00:07:09.911 Test: blob_thin_prov_rle ...passed 00:07:09.911 Test: blob_thin_prov_rw_iov ...passed 00:07:09.911 Test: blob_snapshot_rw ...passed 00:07:10.169 Test: blob_snapshot_rw_iov ...passed 00:07:10.169 Test: blob_inflate_rw ...passed 00:07:10.427 Test: blob_snapshot_freeze_io ...passed 00:07:10.427 Test: blob_operation_split_rw ...passed 00:07:10.686 Test: blob_operation_split_rw_iov ...passed 00:07:10.686 Test: blob_simultaneous_operations ...[2024-07-11 16:22:47.270985] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:10.686 [2024-07-11 
16:22:47.271379] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:10.686 [2024-07-11 16:22:47.271872] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:10.686 [2024-07-11 16:22:47.272031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:10.686 [2024-07-11 16:22:47.274569] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:10.686 [2024-07-11 16:22:47.274747] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:10.686 [2024-07-11 16:22:47.274890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:10.686 [2024-07-11 16:22:47.275033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:10.686 passed 00:07:10.686 Test: blob_persist_test ...passed 00:07:10.686 Test: blob_decouple_snapshot ...passed 00:07:10.686 Test: blob_seek_io_unit ...passed 00:07:10.686 Test: blob_nested_freezes ...passed 00:07:10.686 Suite: blob_blob_copy_extent 00:07:10.686 Test: blob_write ...passed 00:07:10.686 Test: blob_read ...passed 00:07:10.945 Test: blob_rw_verify ...passed 00:07:10.945 Test: blob_rw_verify_iov_nomem ...passed 00:07:10.945 Test: blob_rw_iov_read_only ...passed 00:07:10.945 Test: blob_xattr ...passed 00:07:10.945 Test: blob_dirty_shutdown ...passed 00:07:10.945 Test: blob_is_degraded ...passed 00:07:10.945 Suite: blob_esnap_bs_copy_extent 00:07:10.945 Test: blob_esnap_create ...passed 00:07:10.945 Test: blob_esnap_thread_add_remove ...passed 00:07:11.204 Test: blob_esnap_clone_snapshot ...passed 00:07:11.204 Test: blob_esnap_clone_inflate ...passed 00:07:11.204 Test: blob_esnap_clone_decouple ...passed 00:07:11.204 Test: blob_esnap_clone_reload ...passed 00:07:11.204 Test: blob_esnap_hotplug ...passed 00:07:11.204 00:07:11.204 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.204 suites 16 16 n/a 0 0 00:07:11.204 tests 348 348 348 0 0 00:07:11.204 asserts 92605 92605 92605 0 n/a 00:07:11.204 00:07:11.204 Elapsed time = 13.892 seconds 00:07:11.204 16:22:47 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:11.204 00:07:11.204 00:07:11.204 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.204 http://cunit.sourceforge.net/ 00:07:11.204 00:07:11.204 00:07:11.204 Suite: blob_bdev 00:07:11.204 Test: create_bs_dev ...passed 00:07:11.204 Test: create_bs_dev_ro ...[2024-07-11 16:22:47.997483] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:11.204 passed 00:07:11.204 Test: create_bs_dev_rw ...passed 00:07:11.204 Test: claim_bs_dev ...[2024-07-11 16:22:47.998230] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:11.204 passed 00:07:11.204 Test: claim_bs_dev_ro ...passed 00:07:11.204 Test: deferred_destroy_refs ...passed 00:07:11.204 Test: deferred_destroy_channels ...passed 00:07:11.204 Test: deferred_destroy_threads ...passed 00:07:11.204 00:07:11.204 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.204 suites 1 1 n/a 0 0 00:07:11.204 tests 8 8 8 0 0 00:07:11.204 
asserts 119 119 119 0 n/a 00:07:11.204 00:07:11.204 Elapsed time = 0.001 seconds 00:07:11.463 16:22:48 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:11.463 00:07:11.463 00:07:11.463 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.463 http://cunit.sourceforge.net/ 00:07:11.463 00:07:11.463 00:07:11.463 Suite: tree 00:07:11.463 Test: blobfs_tree_op_test ...passed 00:07:11.463 00:07:11.463 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.463 suites 1 1 n/a 0 0 00:07:11.463 tests 1 1 1 0 0 00:07:11.463 asserts 27 27 27 0 n/a 00:07:11.463 00:07:11.463 Elapsed time = 0.000 seconds 00:07:11.463 16:22:48 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:11.463 00:07:11.463 00:07:11.463 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.463 http://cunit.sourceforge.net/ 00:07:11.463 00:07:11.463 00:07:11.463 Suite: blobfs_async_ut 00:07:11.463 Test: fs_init ...passed 00:07:11.463 Test: fs_open ...passed 00:07:11.463 Test: fs_create ...passed 00:07:11.463 Test: fs_truncate ...passed 00:07:11.463 Test: fs_rename ...[2024-07-11 16:22:48.163622] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:11.463 passed 00:07:11.463 Test: fs_rw_async ...passed 00:07:11.463 Test: fs_writev_readv_async ...passed 00:07:11.463 Test: tree_find_buffer_ut ...passed 00:07:11.463 Test: channel_ops ...passed 00:07:11.463 Test: channel_ops_sync ...passed 00:07:11.463 00:07:11.463 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.463 suites 1 1 n/a 0 0 00:07:11.463 tests 10 10 10 0 0 00:07:11.463 asserts 292 292 292 0 n/a 00:07:11.463 00:07:11.463 Elapsed time = 0.148 seconds 00:07:11.463 16:22:48 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:11.463 00:07:11.463 00:07:11.463 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.463 http://cunit.sourceforge.net/ 00:07:11.463 00:07:11.463 00:07:11.463 Suite: blobfs_sync_ut 00:07:11.721 Test: cache_read_after_write ...[2024-07-11 16:22:48.324713] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1474:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:11.721 passed 00:07:11.721 Test: file_length ...passed 00:07:11.721 Test: append_write_to_extend_blob ...passed 00:07:11.721 Test: partial_buffer ...passed 00:07:11.721 Test: cache_write_null_buffer ...passed 00:07:11.721 Test: fs_create_sync ...passed 00:07:11.721 Test: fs_rename_sync ...passed 00:07:11.721 Test: cache_append_no_cache ...passed 00:07:11.721 Test: fs_delete_file_without_close ...passed 00:07:11.721 00:07:11.721 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.721 suites 1 1 n/a 0 0 00:07:11.722 tests 9 9 9 0 0 00:07:11.722 asserts 345 345 345 0 n/a 00:07:11.722 00:07:11.722 Elapsed time = 0.359 seconds 00:07:11.722 16:22:48 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:11.722 00:07:11.722 00:07:11.722 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.722 http://cunit.sourceforge.net/ 00:07:11.722 00:07:11.722 00:07:11.722 Suite: blobfs_bdev_ut 00:07:11.722 Test: spdk_blobfs_bdev_detect_test ...[2024-07-11 16:22:48.502245] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
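The fs_rename and cache_read_after_write messages above ("Cannot find the file=file1 to deleted") come from asynchronous deletion of a name that is no longer present, which completes with an error instead of hanging. A minimal sketch of that path, assuming fs comes from an earlier spdk_fs_init() and that a missing name completes with -ENOENT (API per spdk/blobfs.h):

    #include "spdk/blobfs.h"

    static void
    delete_done(void *ctx, int fserrno)
    {
        /* expect -ENOENT when "file1" is not in the filesystem */
        *(int *)ctx = fserrno;
    }

    static void
    try_delete(struct spdk_filesystem *fs, int *rc)
    {
        spdk_fs_delete_file_async(fs, "file1", delete_done, rc);
    }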
00:07:11.722 passed 00:07:11.722 Test: spdk_blobfs_bdev_create_test ...[2024-07-11 16:22:48.503154] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:11.722 passed 00:07:11.722 Test: spdk_blobfs_bdev_mount_test ...passed 00:07:11.722 00:07:11.722 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.722 suites 1 1 n/a 0 0 00:07:11.722 tests 3 3 3 0 0 00:07:11.722 asserts 9 9 9 0 n/a 00:07:11.722 00:07:11.722 Elapsed time = 0.001 seconds 00:07:11.722 00:07:11.722 real 0m14.672s 00:07:11.722 user 0m13.963s 00:07:11.722 ************************************ 00:07:11.722 END TEST unittest_blob_blobfs 00:07:11.722 ************************************ 00:07:11.722 sys 0m0.752s 00:07:11.722 16:22:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.722 16:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.981 16:22:48 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:07:11.981 16:22:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.981 16:22:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.981 16:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.981 ************************************ 00:07:11.981 START TEST unittest_event 00:07:11.981 ************************************ 00:07:11.981 16:22:48 -- common/autotest_common.sh@1104 -- # unittest_event 00:07:11.981 16:22:48 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:11.981 00:07:11.981 00:07:11.981 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.981 http://cunit.sourceforge.net/ 00:07:11.981 00:07:11.981 00:07:11.981 Suite: app_suite 00:07:11.981 Test: test_spdk_app_parse_args ...app_ut [options] 00:07:11.981 options:app_ut: invalid option -- 'z' 00:07:11.981 00:07:11.981 -c, --config JSON config file (default none) 00:07:11.981 --json JSON config file (default none) 00:07:11.981 --json-ignore-init-errors 00:07:11.981 don't exit on invalid config entry 00:07:11.981 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:11.981 -g, --single-file-segments 00:07:11.981 force creating just one hugetlbfs file 00:07:11.981 -h, --help show this usage 00:07:11.981 -i, --shm-id shared memory ID (optional) 00:07:11.981 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:11.981 --lcores lcore to CPU mapping list. The list is in the format: 00:07:11.981 [<,lcores[@CPUs]>...] 00:07:11.981 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:11.981 Within the group, '-' is used for range separator, 00:07:11.981 ',' is used for single number separator. 00:07:11.981 '( )' can be omitted for single element group, 00:07:11.981 '@' can be omitted if cpus and lcores have the same value 00:07:11.981 -n, --mem-channels channel number of memory channels used for DPDK 00:07:11.981 -p, --main-core main (primary) core for DPDK 00:07:11.981 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:11.981 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:11.981 --disable-cpumask-locks Disable CPU core lock files. 
00:07:11.981 --silence-noticelog disable notice level logging to stderr 00:07:11.981 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:11.981 -u, --no-pci disable PCI access 00:07:11.981 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:11.981 --max-delay maximum reactor delay (in microseconds) 00:07:11.981 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:11.981 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:11.981 -R, --huge-unlink unlink huge files after initialization 00:07:11.981 -v, --version print SPDK version 00:07:11.981 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:11.981 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:11.981 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:11.981 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:11.981 Tracepoints vary in size and can use more than one trace entry. 00:07:11.981 --rpcs-allowed comma-separated list of permitted RPCS 00:07:11.981 --env-context Opaque context for use of the env implementation 00:07:11.981 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:11.981 --no-huge run without using hugepages 00:07:11.981 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:11.981 -e, --tpoint-group [:] 00:07:11.981 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:11.981 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:11.981 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:11.981 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:11.981 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:11.981 app_ut: unrecognized option '--test-long-opt' 00:07:11.981 app_ut [options] 00:07:11.981 options: 00:07:11.981 -c, --config JSON config file (default none) 00:07:11.981 --json JSON config file (default none) 00:07:11.981 --json-ignore-init-errors 00:07:11.981 don't exit on invalid config entry 00:07:11.981 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:11.981 -g, --single-file-segments 00:07:11.981 force creating just one hugetlbfs file 00:07:11.981 -h, --help show this usage 00:07:11.981 -i, --shm-id shared memory ID (optional) 00:07:11.981 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:11.981 --lcores lcore to CPU mapping list. The list is in the format: 00:07:11.981 [<,lcores[@CPUs]>...] 00:07:11.981 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:11.981 Within the group, '-' is used for range separator, 00:07:11.981 ',' is used for single number separator. 
00:07:11.981 '( )' can be omitted for single element group, 00:07:11.981 '@' can be omitted if cpus and lcores have the same value 00:07:11.981 -n, --mem-channels channel number of memory channels used for DPDK 00:07:11.981 -p, --main-core main (primary) core for DPDK 00:07:11.981 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:11.981 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:11.981 --disable-cpumask-locks Disable CPU core lock files. 00:07:11.981 --silence-noticelog disable notice level logging to stderr 00:07:11.981 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:11.981 -u, --no-pci disable PCI access 00:07:11.981 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:11.981 --max-delay maximum reactor delay (in microseconds) 00:07:11.981 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:11.981 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:11.981 -R, --huge-unlink unlink huge files after initialization 00:07:11.982 -v, --version print SPDK version 00:07:11.982 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:11.982 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:11.982 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:11.982 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:11.982 Tracepoints vary in size and can use more than one trace entry. 00:07:11.982 --rpcs-allowed comma-separated list of permitted RPCS 00:07:11.982 --env-context Opaque context for use of the env implementation 00:07:11.982 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:11.982 --no-huge run without using hugepages 00:07:11.982 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:11.982 -e, --tpoint-group [:] 00:07:11.982 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:11.982 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:11.982 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:07:11.982 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:11.982 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:11.982 [2024-07-11 16:22:48.584077] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
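The repeated usage dumps in this test come from successive spdk_app_parse_args() attempts; the "Duplicated option 'c'" error just above fires when an application's getopt string reuses a flag the generic SPDK option table already owns ('c' collides with -c/--config). A hedged sketch (my_parse/my_usage are hypothetical callbacks; spdk_app_opts_init() takes only the opts pointer on older releases):

    #include "spdk/event.h"

    static int  my_parse(int ch, char *arg) { (void)ch; (void)arg; return 0; }
    static void my_usage(void) {}

    static bool
    has_duplicate_opt(int argc, char **argv)
    {
        struct spdk_app_opts opts = {};

        spdk_app_opts_init(&opts, sizeof(opts)); /* older releases: one-arg form */
        /* "c:" duplicates the generic -c/--config flag, so parsing fails */
        return spdk_app_parse_args(argc, argv, &opts, "c:", NULL,
                                   my_parse, my_usage) == SPDK_APP_PARSE_ARGS_FAIL;
    }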
00:07:11.982 [2024-07-11 16:22:48.584504] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:11.982 app_ut [options] 00:07:11.982 options: 00:07:11.982 -c, --config JSON config file (default none) 00:07:11.982 --json JSON config file (default none) 00:07:11.982 --json-ignore-init-errors 00:07:11.982 don't exit on invalid config entry 00:07:11.982 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:11.982 -g, --single-file-segments 00:07:11.982 force creating just one hugetlbfs file 00:07:11.982 -h, --help show this usage 00:07:11.982 -i, --shm-id shared memory ID (optional) 00:07:11.982 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:07:11.982 --lcores lcore to CPU mapping list. The list is in the format: 00:07:11.982 [<,lcores[@CPUs]>...] 00:07:11.982 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:11.982 Within the group, '-' is used for range separator, 00:07:11.982 ',' is used for single number separator. 00:07:11.982 '( )' can be omitted for single element group, 00:07:11.982 '@' can be omitted if cpus and lcores have the same value 00:07:11.982 -n, --mem-channels channel number of memory channels used for DPDK 00:07:11.982 -p, --main-core main (primary) core for DPDK 00:07:11.982 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:11.982 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:11.982 --disable-cpumask-locks Disable CPU core lock files. 00:07:11.982 --silence-noticelog disable notice level logging to stderr 00:07:11.982 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:11.982 -u, --no-pci disable PCI access 00:07:11.982 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:11.982 --max-delay maximum reactor delay (in microseconds) 00:07:11.982 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:11.982 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:11.982 -R, --huge-unlink unlink huge files after initialization 00:07:11.982 -v, --version print SPDK version 00:07:11.982 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:11.982 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:11.982 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:11.982 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:07:11.982 Tracepoints vary in size and can use more than one trace entry. 00:07:11.982 --rpcs-allowed comma-separated list of permitted RPCS 00:07:11.982 --env-context Opaque context for use of the env implementation 00:07:11.982 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:11.982 --no-huge run without using hugepages 00:07:11.982 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:07:11.982 -e, --tpoint-group [:] 00:07:11.982 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:07:11.982 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:07:11.982 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:07:11.982 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:07:11.982 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:07:11.982 [2024-07-11 16:22:48.587865] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:11.982 passed 00:07:11.982 00:07:11.982 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.982 suites 1 1 n/a 0 0 00:07:11.982 tests 1 1 1 0 0 00:07:11.982 asserts 8 8 8 0 n/a 00:07:11.982 00:07:11.982 Elapsed time = 0.002 seconds 00:07:11.982 16:22:48 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:11.982 00:07:11.982 00:07:11.982 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.982 http://cunit.sourceforge.net/ 00:07:11.982 00:07:11.982 00:07:11.982 Suite: app_suite 00:07:11.982 Test: test_create_reactor ...passed 00:07:11.982 Test: test_init_reactors ...passed 00:07:11.982 Test: test_event_call ...passed 00:07:11.982 Test: test_schedule_thread ...passed 00:07:11.982 Test: test_reschedule_thread ...passed 00:07:11.982 Test: test_bind_thread ...passed 00:07:11.982 Test: test_for_each_reactor ...passed 00:07:11.982 Test: test_reactor_stats ...passed 00:07:11.982 Test: test_scheduler ...passed 00:07:11.982 Test: test_governor ...passed 00:07:11.982 00:07:11.982 Run Summary: Type Total Ran Passed Failed Inactive 00:07:11.982 suites 1 1 n/a 0 0 00:07:11.982 tests 10 10 10 0 0 00:07:11.982 asserts 344 344 344 0 n/a 00:07:11.982 00:07:11.982 Elapsed time = 0.014 seconds 00:07:11.982 00:07:11.982 real 0m0.096s 00:07:11.982 user 0m0.037s 00:07:11.982 sys 0m0.049s 00:07:11.982 16:22:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.982 16:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.982 ************************************ 00:07:11.982 END TEST unittest_event 00:07:11.982 ************************************ 00:07:11.982 16:22:48 -- unit/unittest.sh@233 -- # uname -s 00:07:11.982 16:22:48 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:07:11.982 16:22:48 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:07:11.982 16:22:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:11.982 16:22:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.982 16:22:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.982 ************************************ 00:07:11.983 START TEST unittest_ftl 00:07:11.983 ************************************ 00:07:11.983 16:22:48 -- common/autotest_common.sh@1104 -- # unittest_ftl 00:07:11.983 16:22:48 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:11.983 00:07:11.983 00:07:11.983 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.983 http://cunit.sourceforge.net/ 00:07:11.983 00:07:11.983 00:07:11.983 Suite: ftl_band_suite 00:07:11.983 Test: test_band_block_offset_from_addr_base ...passed 00:07:11.983 Test: test_band_block_offset_from_addr_offset ...passed 00:07:12.242 Test: test_band_addr_from_block_offset ...passed 00:07:12.242 Test: test_band_set_addr ...passed 00:07:12.242 Test: test_invalidate_addr ...passed 00:07:12.242 Test: test_next_xfer_addr ...passed 00:07:12.242 00:07:12.242 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.242 suites 1 1 n/a 0 0 00:07:12.242 tests 6 6 6 0 0 00:07:12.242 asserts 30356 30356 30356 0 n/a 00:07:12.242 
00:07:12.242 Elapsed time = 0.174 seconds 00:07:12.242 16:22:48 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:12.242 00:07:12.242 00:07:12.242 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.242 http://cunit.sourceforge.net/ 00:07:12.242 00:07:12.242 00:07:12.242 Suite: ftl_bitmap 00:07:12.242 Test: test_ftl_bitmap_create ...[2024-07-11 16:22:48.976545] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:12.242 [2024-07-11 16:22:48.976985] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:12.242 passed 00:07:12.242 Test: test_ftl_bitmap_get ...passed 00:07:12.242 Test: test_ftl_bitmap_set ...passed 00:07:12.242 Test: test_ftl_bitmap_clear ...passed 00:07:12.242 Test: test_ftl_bitmap_find_first_set ...passed 00:07:12.242 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:12.242 Test: test_ftl_bitmap_count_set ...passed 00:07:12.242 00:07:12.242 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.242 suites 1 1 n/a 0 0 00:07:12.242 tests 7 7 7 0 0 00:07:12.242 asserts 137 137 137 0 n/a 00:07:12.242 00:07:12.242 Elapsed time = 0.001 seconds 00:07:12.242 16:22:48 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:12.242 00:07:12.242 00:07:12.242 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.242 http://cunit.sourceforge.net/ 00:07:12.242 00:07:12.242 00:07:12.242 Suite: ftl_io_suite 00:07:12.242 Test: test_completion ...passed 00:07:12.242 Test: test_multiple_ios ...passed 00:07:12.242 00:07:12.242 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.242 suites 1 1 n/a 0 0 00:07:12.242 tests 2 2 2 0 0 00:07:12.242 asserts 47 47 47 0 n/a 00:07:12.242 00:07:12.242 Elapsed time = 0.003 seconds 00:07:12.242 16:22:49 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:12.242 00:07:12.242 00:07:12.242 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.242 http://cunit.sourceforge.net/ 00:07:12.242 00:07:12.242 00:07:12.242 Suite: ftl_mngt 00:07:12.242 Test: test_next_step ...passed 00:07:12.242 Test: test_continue_step ...passed 00:07:12.242 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:12.242 Test: test_fail_step ...passed 00:07:12.242 Test: test_mngt_call_and_call_rollback ...passed 00:07:12.242 Test: test_nested_process_failure ...passed 00:07:12.242 00:07:12.242 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.242 suites 1 1 n/a 0 0 00:07:12.242 tests 6 6 6 0 0 00:07:12.242 asserts 176 176 176 0 n/a 00:07:12.242 00:07:12.243 Elapsed time = 0.002 seconds 00:07:12.502 16:22:49 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:12.502 00:07:12.502 00:07:12.502 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.502 http://cunit.sourceforge.net/ 00:07:12.502 00:07:12.502 00:07:12.502 Suite: ftl_mempool 00:07:12.502 Test: test_ftl_mempool_create ...passed 00:07:12.502 Test: test_ftl_mempool_get_put ...passed 00:07:12.502 00:07:12.502 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.502 suites 1 1 n/a 0 0 00:07:12.502 tests 2 2 2 0 0 00:07:12.502 asserts 36 36 36 0 n/a 00:07:12.502 00:07:12.502 Elapsed time = 0.000 seconds 00:07:12.502 16:22:49 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:12.502 00:07:12.502 00:07:12.502 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.502 http://cunit.sourceforge.net/ 00:07:12.502 00:07:12.502 00:07:12.502 Suite: ftl_addr64_suite 00:07:12.502 Test: test_addr_cached ...passed 00:07:12.502 00:07:12.502 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.502 suites 1 1 n/a 0 0 00:07:12.502 tests 1 1 1 0 0 00:07:12.502 asserts 1536 1536 1536 0 n/a 00:07:12.502 00:07:12.502 Elapsed time = 0.000 seconds 00:07:12.502 16:22:49 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:12.502 00:07:12.502 00:07:12.502 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.502 http://cunit.sourceforge.net/ 00:07:12.502 00:07:12.502 00:07:12.502 Suite: ftl_sb 00:07:12.502 Test: test_sb_crc_v2 ...passed 00:07:12.502 Test: test_sb_crc_v3 ...passed 00:07:12.502 Test: test_sb_v3_md_layout ...[2024-07-11 16:22:49.130099] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:12.502 [2024-07-11 16:22:49.130680] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:12.502 [2024-07-11 16:22:49.130906] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:12.502 [2024-07-11 16:22:49.131214] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:12.502 [2024-07-11 16:22:49.131530] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:12.502 [2024-07-11 16:22:49.131891] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:12.502 [2024-07-11 16:22:49.132194] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:12.502 [2024-07-11 16:22:49.132542] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:12.502 [2024-07-11 16:22:49.132983] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:12.502 [2024-07-11 16:22:49.133220] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:12.502 [2024-07-11 16:22:49.133633] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:12.502 passed 00:07:12.502 Test: test_sb_v5_md_layout ...passed 00:07:12.502 00:07:12.502 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.502 suites 1 1 n/a 0 0 00:07:12.502 tests 4 4 4 0 0 00:07:12.502 asserts 148 148 148 0 n/a 00:07:12.502 00:07:12.502 Elapsed time = 0.004 seconds 00:07:12.502 16:22:49 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:12.502 00:07:12.502 00:07:12.502 CUnit - A unit testing framework 
for C - Version 2.1-3 00:07:12.502 http://cunit.sourceforge.net/ 00:07:12.502 00:07:12.502 00:07:12.502 Suite: ftl_layout_upgrade 00:07:12.502 Test: test_l2p_upgrade ...passed 00:07:12.502 00:07:12.502 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.502 suites 1 1 n/a 0 0 00:07:12.502 tests 1 1 1 0 0 00:07:12.502 asserts 140 140 140 0 n/a 00:07:12.502 00:07:12.502 Elapsed time = 0.001 seconds 00:07:12.502 00:07:12.502 real 0m0.474s 00:07:12.502 user 0m0.247s 00:07:12.502 sys 0m0.218s 00:07:12.502 16:22:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.502 16:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.502 ************************************ 00:07:12.502 END TEST unittest_ftl 00:07:12.502 ************************************ 00:07:12.502 16:22:49 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:12.502 16:22:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:12.502 16:22:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.502 16:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.502 ************************************ 00:07:12.502 START TEST unittest_accel 00:07:12.502 ************************************ 00:07:12.503 16:22:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:12.503 00:07:12.503 00:07:12.503 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.503 http://cunit.sourceforge.net/ 00:07:12.503 00:07:12.503 00:07:12.503 Suite: accel_sequence 00:07:12.503 Test: test_sequence_fill_copy ...passed 00:07:12.503 Test: test_sequence_abort ...passed 00:07:12.503 Test: test_sequence_append_error ...passed 00:07:12.503 Test: test_sequence_completion_error ...[2024-07-11 16:22:49.255616] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f53667027c0 00:07:12.503 [2024-07-11 16:22:49.255967] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f53667027c0 00:07:12.503 [2024-07-11 16:22:49.256119] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f53667027c0 00:07:12.503 [2024-07-11 16:22:49.256266] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f53667027c0 00:07:12.503 passed 00:07:12.503 Test: test_sequence_decompress ...passed 00:07:12.503 Test: test_sequence_reverse ...passed 00:07:12.503 Test: test_sequence_copy_elision ...passed 00:07:12.503 Test: test_sequence_accel_buffers ...passed 00:07:12.503 Test: test_sequence_memory_domain ...[2024-07-11 16:22:49.266345] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:12.503 [2024-07-11 16:22:49.266677] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:12.503 passed 00:07:12.503 Test: test_sequence_module_memory_domain ...passed 00:07:12.503 Test: test_sequence_crypto ...passed 00:07:12.503 Test: test_sequence_driver ...[2024-07-11 16:22:49.273379] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f5365ada7c0 using driver: ut 00:07:12.503 
[2024-07-11 16:22:49.273618] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f5365ada7c0 through driver: ut 00:07:12.503 passed 00:07:12.503 Test: test_sequence_same_iovs ...passed 00:07:12.503 Test: test_sequence_crc32 ...passed 00:07:12.503 Suite: accel 00:07:12.503 Test: test_spdk_accel_task_complete ...passed 00:07:12.503 Test: test_get_task ...passed 00:07:12.503 Test: test_spdk_accel_submit_copy ...passed 00:07:12.503 Test: test_spdk_accel_submit_dualcast ...[2024-07-11 16:22:49.279018] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:12.503 passed[2024-07-11 16:22:49.279128] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:12.503 00:07:12.503 Test: test_spdk_accel_submit_compare ...passed 00:07:12.503 Test: test_spdk_accel_submit_fill ...passed 00:07:12.503 Test: test_spdk_accel_submit_crc32c ...passed 00:07:12.503 Test: test_spdk_accel_submit_crc32cv ...passed 00:07:12.503 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:12.503 Test: test_spdk_accel_submit_xor ...passed 00:07:12.503 Test: test_spdk_accel_module_find_by_name ...passed 00:07:12.503 Test: test_spdk_accel_module_register ...passed 00:07:12.503 00:07:12.503 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.503 suites 2 2 n/a 0 0 00:07:12.503 tests 26 26 26 0 0 00:07:12.503 asserts 831 831 831 0 n/a 00:07:12.503 00:07:12.503 Elapsed time = 0.030 seconds 00:07:12.503 00:07:12.503 real 0m0.074s 00:07:12.503 user 0m0.047s 00:07:12.503 sys 0m0.022s 00:07:12.503 16:22:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.503 16:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.503 ************************************ 00:07:12.503 END TEST unittest_accel 00:07:12.503 ************************************ 00:07:12.762 16:22:49 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:12.762 16:22:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:12.762 16:22:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.762 16:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.762 ************************************ 00:07:12.762 START TEST unittest_ioat 00:07:12.762 ************************************ 00:07:12.763 16:22:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:12.763 00:07:12.763 00:07:12.763 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.763 http://cunit.sourceforge.net/ 00:07:12.763 00:07:12.763 00:07:12.763 Suite: ioat 00:07:12.763 Test: ioat_state_check ...passed 00:07:12.763 00:07:12.763 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.763 suites 1 1 n/a 0 0 00:07:12.763 tests 1 1 1 0 0 00:07:12.763 asserts 32 32 32 0 n/a 00:07:12.763 00:07:12.763 Elapsed time = 0.000 seconds 00:07:12.763 00:07:12.763 real 0m0.028s 00:07:12.763 user 0m0.012s 00:07:12.763 sys 0m0.016s 00:07:12.763 16:22:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.763 16:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.763 ************************************ 00:07:12.763 END TEST unittest_ioat 00:07:12.763 ************************************ 00:07:12.763 16:22:49 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:12.763 16:22:49 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:12.763 16:22:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:12.763 16:22:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.763 16:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.763 ************************************ 00:07:12.763 START TEST unittest_idxd_user 00:07:12.763 ************************************ 00:07:12.763 16:22:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:12.763 00:07:12.763 00:07:12.763 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.763 http://cunit.sourceforge.net/ 00:07:12.763 00:07:12.763 00:07:12.763 Suite: idxd_user 00:07:12.763 Test: test_idxd_wait_cmd ...[2024-07-11 16:22:49.449535] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:12.763 [2024-07-11 16:22:49.449914] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:12.763 passed 00:07:12.763 Test: test_idxd_reset_dev ...[2024-07-11 16:22:49.450302] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:12.763 [2024-07-11 16:22:49.450438] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:12.763 passed 00:07:12.763 Test: test_idxd_group_config ...passed 00:07:12.763 Test: test_idxd_wq_config ...passed 00:07:12.763 00:07:12.763 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.763 suites 1 1 n/a 0 0 00:07:12.763 tests 4 4 4 0 0 00:07:12.763 asserts 20 20 20 0 n/a 00:07:12.763 00:07:12.763 Elapsed time = 0.001 seconds 00:07:12.763 00:07:12.763 real 0m0.031s 00:07:12.763 user 0m0.016s 00:07:12.763 sys 0m0.015s 00:07:12.763 16:22:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.763 16:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.763 ************************************ 00:07:12.763 END TEST unittest_idxd_user 00:07:12.763 ************************************ 00:07:12.763 16:22:49 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:07:12.763 16:22:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:12.763 16:22:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.763 16:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:12.763 ************************************ 00:07:12.763 START TEST unittest_iscsi 00:07:12.763 ************************************ 00:07:12.763 16:22:49 -- common/autotest_common.sh@1104 -- # unittest_iscsi 00:07:12.763 16:22:49 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:12.763 00:07:12.763 00:07:12.763 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.763 http://cunit.sourceforge.net/ 00:07:12.763 00:07:12.763 00:07:12.763 Suite: conn_suite 00:07:12.763 Test: read_task_split_in_order_case ...passed 00:07:12.763 Test: read_task_split_reverse_order_case ...passed 00:07:12.763 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:12.763 Test: process_non_read_task_completion_test ...passed 00:07:12.763 Test: free_tasks_on_connection ...passed 00:07:12.763 Test: free_tasks_with_queued_datain ...passed 00:07:12.763 Test: 
abort_queued_datain_task_test ...passed 00:07:12.763 Test: abort_queued_datain_tasks_test ...passed 00:07:12.763 00:07:12.763 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.763 suites 1 1 n/a 0 0 00:07:12.763 tests 8 8 8 0 0 00:07:12.763 asserts 230 230 230 0 n/a 00:07:12.763 00:07:12.763 Elapsed time = 0.000 seconds 00:07:12.763 16:22:49 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:13.050 00:07:13.050 00:07:13.050 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.050 http://cunit.sourceforge.net/ 00:07:13.050 00:07:13.050 00:07:13.050 Suite: iscsi_suite 00:07:13.050 Test: param_negotiation_test ...passed 00:07:13.050 Test: list_negotiation_test ...passed 00:07:13.050 Test: parse_valid_test ...passed 00:07:13.050 Test: parse_invalid_test ...[2024-07-11 16:22:49.577773] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:13.050 [2024-07-11 16:22:49.578175] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:07:13.050 [2024-07-11 16:22:49.578338] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:07:13.050 [2024-07-11 16:22:49.578525] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:13.050 [2024-07-11 16:22:49.578772] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:13.050 [2024-07-11 16:22:49.578938] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:07:13.050 [2024-07-11 16:22:49.579181] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:13.050 passed 00:07:13.050 00:07:13.050 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.050 suites 1 1 n/a 0 0 00:07:13.050 tests 4 4 4 0 0 00:07:13.050 asserts 161 161 161 0 n/a 00:07:13.050 00:07:13.050 Elapsed time = 0.005 seconds 00:07:13.050 16:22:49 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:07:13.050 00:07:13.050 00:07:13.050 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.050 http://cunit.sourceforge.net/ 00:07:13.050 00:07:13.050 00:07:13.050 Suite: iscsi_target_node_suite 00:07:13.050 Test: add_lun_test_cases ...[2024-07-11 16:22:49.615848] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:07:13.050 [2024-07-11 16:22:49.616281] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:07:13.050 [2024-07-11 16:22:49.616539] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:13.050 [2024-07-11 16:22:49.616701] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:13.050 [2024-07-11 16:22:49.616846] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:07:13.050 passed 00:07:13.050 Test: allow_any_allowed ...passed 00:07:13.050 Test: allow_ipv6_allowed ...passed 00:07:13.050 Test: allow_ipv6_denied ...passed 00:07:13.050 Test: allow_ipv6_invalid ...passed 00:07:13.050 Test: allow_ipv4_allowed ...passed 00:07:13.050 Test: allow_ipv4_denied ...passed 00:07:13.050 Test: allow_ipv4_invalid 
...passed 00:07:13.050 Test: node_access_allowed ...passed 00:07:13.050 Test: node_access_denied_by_empty_netmask ...passed 00:07:13.050 Test: node_access_multi_initiator_groups_cases ...passed 00:07:13.050 Test: allow_iscsi_name_multi_maps_case ...passed 00:07:13.050 Test: chap_param_test_cases ...[2024-07-11 16:22:49.619627] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:07:13.050 [2024-07-11 16:22:49.619826] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:07:13.050 [2024-07-11 16:22:49.619991] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:07:13.050 [2024-07-11 16:22:49.620163] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:07:13.050 [2024-07-11 16:22:49.620364] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:07:13.050 passed 00:07:13.050 00:07:13.050 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.050 suites 1 1 n/a 0 0 00:07:13.050 tests 13 13 13 0 0 00:07:13.050 asserts 50 50 50 0 n/a 00:07:13.050 00:07:13.050 Elapsed time = 0.002 seconds 00:07:13.050 16:22:49 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:07:13.050 00:07:13.050 00:07:13.050 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.050 http://cunit.sourceforge.net/ 00:07:13.050 00:07:13.050 00:07:13.050 Suite: iscsi_suite 00:07:13.050 Test: op_login_check_target_test ...[2024-07-11 16:22:49.654730] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:07:13.050 passed 00:07:13.050 Test: op_login_session_normal_test ...[2024-07-11 16:22:49.655371] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:13.050 [2024-07-11 16:22:49.655546] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:13.050 [2024-07-11 16:22:49.655692] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:13.050 [2024-07-11 16:22:49.655856] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:07:13.050 [2024-07-11 16:22:49.656051] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:13.050 [2024-07-11 16:22:49.656352] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:07:13.050 [2024-07-11 16:22:49.656520] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:13.050 passed 00:07:13.050 Test: maxburstlength_test ...[2024-07-11 16:22:49.657133] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:13.050 [2024-07-11 16:22:49.657318] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header 
(opcode=5) failed on NULL(NULL) 00:07:13.050 passed 00:07:13.050 Test: underflow_for_read_transfer_test ...passed 00:07:13.050 Test: underflow_for_zero_read_transfer_test ...passed 00:07:13.050 Test: underflow_for_request_sense_test ...passed 00:07:13.050 Test: underflow_for_check_condition_test ...passed 00:07:13.050 Test: add_transfer_task_test ...passed 00:07:13.050 Test: get_transfer_task_test ...passed 00:07:13.050 Test: del_transfer_task_test ...passed 00:07:13.050 Test: clear_all_transfer_tasks_test ...passed 00:07:13.050 Test: build_iovs_test ...passed 00:07:13.050 Test: build_iovs_with_md_test ...passed 00:07:13.050 Test: pdu_hdr_op_login_test ...[2024-07-11 16:22:49.660888] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:07:13.050 [2024-07-11 16:22:49.661159] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:07:13.050 [2024-07-11 16:22:49.661368] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:07:13.050 passed 00:07:13.050 Test: pdu_hdr_op_text_test ...[2024-07-11 16:22:49.661755] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:13.050 [2024-07-11 16:22:49.661952] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:07:13.050 [2024-07-11 16:22:49.662096] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:07:13.050 passed 00:07:13.050 Test: pdu_hdr_op_logout_test ...[2024-07-11 16:22:49.662435] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
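The maxburstlength and pdu_hdr_op_* cases above exercise SPDK's PDU header validation: opcodes that are illegal on a discovery session, data-segment lengths larger than the R2T advertised, reserved NSG codes, and similar. For reference, a sketch of the 48-byte iSCSI Basic Header Segment these checks parse, following RFC 7143 (the struct and helper names are illustrative, not SPDK's internal definitions):

```c
#include <stdint.h>

/* iSCSI Basic Header Segment (48 bytes) as laid out in RFC 7143. */
struct iscsi_bhs {
	uint8_t  opcode;                /* bit 6 = immediate, bits 0-5 = opcode */
	uint8_t  flags[3];
	uint8_t  total_ahs_len;         /* in 4-byte words */
	uint8_t  data_segment_len[3];   /* 24-bit big-endian byte count */
	uint8_t  rest[40];
};

#define ISCSI_OP_SCSI_DATAOUT 0x05      /* the "opcode=5" rejected in the log */

static inline uint8_t
bhs_opcode(const struct iscsi_bhs *bhs)
{
	return bhs->opcode & 0x3f;
}

static inline uint32_t
bhs_data_segment_len(const struct iscsi_bhs *bhs)
{
	/* Compared against MaxBurstLength / the R2T grant in the tests above. */
	return ((uint32_t)bhs->data_segment_len[0] << 16) |
	       ((uint32_t)bhs->data_segment_len[1] << 8) |
	       (uint32_t)bhs->data_segment_len[2];
}
```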
00:07:13.050 passed 00:07:13.050 Test: pdu_hdr_op_scsi_test ...[2024-07-11 16:22:49.662867] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:13.050 [2024-07-11 16:22:49.662997] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:13.050 [2024-07-11 16:22:49.663169] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:07:13.050 [2024-07-11 16:22:49.663368] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:13.050 [2024-07-11 16:22:49.663557] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:07:13.050 [2024-07-11 16:22:49.663831] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:13.050 passed 00:07:13.051 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-11 16:22:49.664198] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:07:13.051 [2024-07-11 16:22:49.664427] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:07:13.051 passed 00:07:13.051 Test: pdu_hdr_op_nopout_test ...[2024-07-11 16:22:49.664924] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:07:13.051 [2024-07-11 16:22:49.665141] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:13.051 [2024-07-11 16:22:49.665269] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:13.051 [2024-07-11 16:22:49.665338] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:07:13.051 passed 00:07:13.051 Test: pdu_hdr_op_data_test ...[2024-07-11 16:22:49.665736] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:07:13.051 [2024-07-11 16:22:49.665915] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:13.051 [2024-07-11 16:22:49.666085] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:13.051 [2024-07-11 16:22:49.666269] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:07:13.051 [2024-07-11 16:22:49.666419] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:07:13.051 [2024-07-11 16:22:49.666612] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:07:13.051 [2024-07-11 16:22:49.666763] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:07:13.051 passed 00:07:13.051 Test: empty_text_with_cbit_test ...passed 00:07:13.051 Test: pdu_payload_read_test ...[2024-07-11 
16:22:49.669394] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:07:13.051 passed 00:07:13.051 Test: data_out_pdu_sequence_test ...passed 00:07:13.051 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:07:13.051 00:07:13.051 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.051 suites 1 1 n/a 0 0 00:07:13.051 tests 24 24 24 0 0 00:07:13.051 asserts 150253 150253 150253 0 n/a 00:07:13.051 00:07:13.051 Elapsed time = 0.019 seconds 00:07:13.051 16:22:49 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:07:13.051 00:07:13.051 00:07:13.051 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.051 http://cunit.sourceforge.net/ 00:07:13.051 00:07:13.051 00:07:13.051 Suite: init_grp_suite 00:07:13.051 Test: create_initiator_group_success_case ...passed 00:07:13.051 Test: find_initiator_group_success_case ...passed 00:07:13.051 Test: register_initiator_group_twice_case ...passed 00:07:13.051 Test: add_initiator_name_success_case ...passed 00:07:13.051 Test: add_initiator_name_fail_case ...[2024-07-11 16:22:49.715232] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:07:13.051 passed 00:07:13.051 Test: delete_all_initiator_names_success_case ...passed 00:07:13.051 Test: add_netmask_success_case ...passed 00:07:13.051 Test: add_netmask_fail_case ...[2024-07-11 16:22:49.716131] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:07:13.051 passed 00:07:13.051 Test: delete_all_netmasks_success_case ...passed 00:07:13.051 Test: initiator_name_overwrite_all_to_any_case ...passed 00:07:13.051 Test: netmask_overwrite_all_to_any_case ...passed 00:07:13.051 Test: add_delete_initiator_names_case ...passed 00:07:13.051 Test: add_duplicated_initiator_names_case ...passed 00:07:13.051 Test: delete_nonexisting_initiator_names_case ...passed 00:07:13.051 Test: add_delete_netmasks_case ...passed 00:07:13.051 Test: add_duplicated_netmasks_case ...passed 00:07:13.051 Test: delete_nonexisting_netmasks_case ...passed 00:07:13.051 00:07:13.051 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.051 suites 1 1 n/a 0 0 00:07:13.051 tests 17 17 17 0 0 00:07:13.051 asserts 108 108 108 0 n/a 00:07:13.051 00:07:13.051 Elapsed time = 0.002 seconds 00:07:13.051 16:22:49 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:07:13.051 00:07:13.051 00:07:13.051 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.051 http://cunit.sourceforge.net/ 00:07:13.051 00:07:13.051 00:07:13.051 Suite: portal_grp_suite 00:07:13.051 Test: portal_create_ipv4_normal_case ...passed 00:07:13.051 Test: portal_create_ipv6_normal_case ...passed 00:07:13.051 Test: portal_create_ipv4_wildcard_case ...passed 00:07:13.051 Test: portal_create_ipv6_wildcard_case ...passed 00:07:13.051 Test: portal_create_twice_case ...[2024-07-11 16:22:49.756917] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:07:13.051 passed 00:07:13.051 Test: portal_grp_register_unregister_case ...passed 00:07:13.051 Test: portal_grp_register_twice_case ...passed 00:07:13.051 Test: portal_grp_add_delete_case ...passed 00:07:13.051 Test: portal_grp_add_delete_twice_case ...passed 00:07:13.051 00:07:13.051 Run Summary: 
Type Total Ran Passed Failed Inactive 00:07:13.051 suites 1 1 n/a 0 0 00:07:13.051 tests 9 9 9 0 0 00:07:13.051 asserts 44 44 44 0 n/a 00:07:13.051 00:07:13.051 Elapsed time = 0.004 seconds 00:07:13.051 00:07:13.051 real 0m0.261s 00:07:13.051 user 0m0.146s 00:07:13.051 sys 0m0.098s 00:07:13.051 16:22:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.051 16:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:13.051 ************************************ 00:07:13.051 END TEST unittest_iscsi 00:07:13.051 ************************************ 00:07:13.051 16:22:49 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:07:13.051 16:22:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:13.051 16:22:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.051 16:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:13.051 ************************************ 00:07:13.051 START TEST unittest_json 00:07:13.051 ************************************ 00:07:13.051 16:22:49 -- common/autotest_common.sh@1104 -- # unittest_json 00:07:13.051 16:22:49 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:07:13.051 00:07:13.051 00:07:13.051 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.051 http://cunit.sourceforge.net/ 00:07:13.051 00:07:13.051 00:07:13.051 Suite: json 00:07:13.051 Test: test_parse_literal ...passed 00:07:13.051 Test: test_parse_string_simple ...passed 00:07:13.051 Test: test_parse_string_control_chars ...passed 00:07:13.051 Test: test_parse_string_utf8 ...passed 00:07:13.051 Test: test_parse_string_escapes_twochar ...passed 00:07:13.051 Test: test_parse_string_escapes_unicode ...passed 00:07:13.051 Test: test_parse_number ...passed 00:07:13.051 Test: test_parse_array ...passed 00:07:13.051 Test: test_parse_object ...passed 00:07:13.051 Test: test_parse_nesting ...passed 00:07:13.051 Test: test_parse_comment ...passed 00:07:13.051 00:07:13.051 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.051 suites 1 1 n/a 0 0 00:07:13.051 tests 11 11 11 0 0 00:07:13.051 asserts 1516 1516 1516 0 n/a 00:07:13.051 00:07:13.051 Elapsed time = 0.002 seconds 00:07:13.309 16:22:49 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:07:13.309 00:07:13.309 00:07:13.309 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.309 http://cunit.sourceforge.net/ 00:07:13.309 00:07:13.309 00:07:13.309 Suite: json 00:07:13.309 Test: test_strequal ...passed 00:07:13.309 Test: test_num_to_uint16 ...passed 00:07:13.309 Test: test_num_to_int32 ...passed 00:07:13.309 Test: test_num_to_uint64 ...passed 00:07:13.309 Test: test_decode_object ...passed 00:07:13.309 Test: test_decode_array ...passed 00:07:13.309 Test: test_decode_bool ...passed 00:07:13.309 Test: test_decode_uint16 ...passed 00:07:13.309 Test: test_decode_int32 ...passed 00:07:13.309 Test: test_decode_uint32 ...passed 00:07:13.309 Test: test_decode_uint64 ...passed 00:07:13.309 Test: test_decode_string ...passed 00:07:13.309 Test: test_decode_uuid ...passed 00:07:13.309 Test: test_find ...passed 00:07:13.309 Test: test_find_array ...passed 00:07:13.309 Test: test_iterating ...passed 00:07:13.309 Test: test_free_object ...passed 00:07:13.309 00:07:13.309 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.309 suites 1 1 n/a 0 0 00:07:13.309 tests 17 17 17 0 0 00:07:13.309 asserts 236 236 236 0 n/a 00:07:13.309 00:07:13.309 Elapsed time = 0.001 seconds 00:07:13.309 
16:22:49 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:07:13.310 00:07:13.310 00:07:13.310 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.310 http://cunit.sourceforge.net/ 00:07:13.310 00:07:13.310 00:07:13.310 Suite: json 00:07:13.310 Test: test_write_literal ...passed 00:07:13.310 Test: test_write_string_simple ...passed 00:07:13.310 Test: test_write_string_escapes ...passed 00:07:13.310 Test: test_write_string_utf16le ...passed 00:07:13.310 Test: test_write_number_int32 ...passed 00:07:13.310 Test: test_write_number_uint32 ...passed 00:07:13.310 Test: test_write_number_uint128 ...passed 00:07:13.310 Test: test_write_string_number_uint128 ...passed 00:07:13.310 Test: test_write_number_int64 ...passed 00:07:13.310 Test: test_write_number_uint64 ...passed 00:07:13.310 Test: test_write_number_double ...passed 00:07:13.310 Test: test_write_uuid ...passed 00:07:13.310 Test: test_write_array ...passed 00:07:13.310 Test: test_write_object ...passed 00:07:13.310 Test: test_write_nesting ...passed 00:07:13.310 Test: test_write_val ...passed 00:07:13.310 00:07:13.310 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.310 suites 1 1 n/a 0 0 00:07:13.310 tests 16 16 16 0 0 00:07:13.310 asserts 918 918 918 0 n/a 00:07:13.310 00:07:13.310 Elapsed time = 0.005 seconds 00:07:13.310 16:22:49 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:07:13.310 00:07:13.310 00:07:13.310 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.310 http://cunit.sourceforge.net/ 00:07:13.310 00:07:13.310 00:07:13.310 Suite: jsonrpc 00:07:13.310 Test: test_parse_request ...passed 00:07:13.310 Test: test_parse_request_streaming ...passed 00:07:13.310 00:07:13.310 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.310 suites 1 1 n/a 0 0 00:07:13.310 tests 2 2 2 0 0 00:07:13.310 asserts 289 289 289 0 n/a 00:07:13.310 00:07:13.310 Elapsed time = 0.003 seconds 00:07:13.310 00:07:13.310 real 0m0.142s 00:07:13.310 user 0m0.082s 00:07:13.310 sys 0m0.054s 00:07:13.310 16:22:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.310 16:22:49 -- common/autotest_common.sh@10 -- # set +x 00:07:13.310 ************************************ 00:07:13.310 END TEST unittest_json 00:07:13.310 ************************************ 00:07:13.310 16:22:50 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:07:13.310 16:22:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:13.310 16:22:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.310 16:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:13.310 ************************************ 00:07:13.310 START TEST unittest_rpc 00:07:13.310 ************************************ 00:07:13.310 16:22:50 -- common/autotest_common.sh@1104 -- # unittest_rpc 00:07:13.310 16:22:50 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:07:13.310 00:07:13.310 00:07:13.310 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.310 http://cunit.sourceforge.net/ 00:07:13.310 00:07:13.310 00:07:13.310 Suite: rpc 00:07:13.310 Test: test_jsonrpc_handler ...passed 00:07:13.310 Test: test_spdk_rpc_is_method_allowed ...passed 00:07:13.310 Test: test_rpc_get_methods ...[2024-07-11 16:22:50.033264] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:07:13.310 passed 00:07:13.310 Test: 
test_rpc_spdk_get_version ...passed 00:07:13.310 Test: test_spdk_rpc_listen_close ...passed 00:07:13.310 00:07:13.310 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.310 suites 1 1 n/a 0 0 00:07:13.310 tests 5 5 5 0 0 00:07:13.310 asserts 20 20 20 0 n/a 00:07:13.310 00:07:13.310 Elapsed time = 0.000 seconds 00:07:13.310 ************************************ 00:07:13.310 END TEST unittest_rpc 00:07:13.310 ************************************ 00:07:13.310 00:07:13.310 real 0m0.032s 00:07:13.310 user 0m0.026s 00:07:13.310 sys 0m0.006s 00:07:13.310 16:22:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.310 16:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:13.310 16:22:50 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:13.310 16:22:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:13.310 16:22:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.310 16:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:13.310 ************************************ 00:07:13.310 START TEST unittest_notify 00:07:13.310 ************************************ 00:07:13.310 16:22:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:13.310 00:07:13.310 00:07:13.310 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.310 http://cunit.sourceforge.net/ 00:07:13.310 00:07:13.310 00:07:13.310 Suite: app_suite 00:07:13.310 Test: notify ...passed 00:07:13.310 00:07:13.310 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.310 suites 1 1 n/a 0 0 00:07:13.310 tests 1 1 1 0 0 00:07:13.310 asserts 13 13 13 0 n/a 00:07:13.310 00:07:13.310 Elapsed time = 0.000 seconds 00:07:13.569 ************************************ 00:07:13.569 END TEST unittest_notify 00:07:13.569 ************************************ 00:07:13.569 00:07:13.569 real 0m0.029s 00:07:13.569 user 0m0.016s 00:07:13.569 sys 0m0.012s 00:07:13.569 16:22:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.569 16:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:13.569 16:22:50 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:07:13.569 16:22:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:13.569 16:22:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.569 16:22:50 -- common/autotest_common.sh@10 -- # set +x 00:07:13.569 ************************************ 00:07:13.569 START TEST unittest_nvme 00:07:13.569 ************************************ 00:07:13.569 16:22:50 -- common/autotest_common.sh@1104 -- # unittest_nvme 00:07:13.569 16:22:50 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:07:13.569 00:07:13.569 00:07:13.569 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.569 http://cunit.sourceforge.net/ 00:07:13.569 00:07:13.569 00:07:13.569 Suite: nvme 00:07:13.569 Test: test_opc_data_transfer ...passed 00:07:13.569 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:07:13.569 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:07:13.569 Test: test_trid_parse_and_compare ...[2024-07-11 16:22:50.189341] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:07:13.569 [2024-07-11 16:22:50.189863] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:13.569 [2024-07-11 
16:22:50.190101] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:07:13.569 [2024-07-11 16:22:50.190265] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:13.569 [2024-07-11 16:22:50.190406] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:07:13.569 [2024-07-11 16:22:50.190549] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:13.569 passed 00:07:13.569 Test: test_trid_trtype_str ...passed 00:07:13.569 Test: test_trid_adrfam_str ...passed 00:07:13.569 Test: test_nvme_ctrlr_probe ...[2024-07-11 16:22:50.191512] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:13.569 passed 00:07:13.569 Test: test_spdk_nvme_probe ...[2024-07-11 16:22:50.191974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:13.569 [2024-07-11 16:22:50.192128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:13.569 [2024-07-11 16:22:50.192397] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:07:13.569 [2024-07-11 16:22:50.192579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:13.569 passed 00:07:13.569 Test: test_spdk_nvme_connect ...[2024-07-11 16:22:50.192983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:07:13.569 [2024-07-11 16:22:50.193468] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:13.569 [2024-07-11 16:22:50.193648] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:07:13.569 passed 00:07:13.569 Test: test_nvme_ctrlr_probe_internal ...[2024-07-11 16:22:50.194071] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:13.569 [2024-07-11 16:22:50.194239] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:07:13.569 passed 00:07:13.569 Test: test_nvme_init_controllers ...[2024-07-11 16:22:50.194634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:07:13.569 passed 00:07:13.569 Test: test_nvme_driver_init ...[2024-07-11 16:22:50.195035] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:07:13.569 [2024-07-11 16:22:50.195190] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:13.569 [2024-07-11 16:22:50.309074] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:07:13.569 [2024-07-11 16:22:50.309422] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:07:13.569 passed 00:07:13.569 Test: test_spdk_nvme_detach ...passed 00:07:13.569 Test: test_nvme_completion_poll_cb ...passed 00:07:13.569 Test: test_nvme_user_copy_cmd_complete ...passed 00:07:13.569 Test: 
test_nvme_allocate_request_null ...passed 00:07:13.569 Test: test_nvme_allocate_request ...passed 00:07:13.569 Test: test_nvme_free_request ...passed 00:07:13.569 Test: test_nvme_allocate_request_user_copy ...passed 00:07:13.569 Test: test_nvme_robust_mutex_init_shared ...passed 00:07:13.569 Test: test_nvme_request_check_timeout ...passed 00:07:13.569 Test: test_nvme_wait_for_completion ...passed 00:07:13.569 Test: test_spdk_nvme_parse_func ...passed 00:07:13.569 Test: test_spdk_nvme_detach_async ...passed 00:07:13.569 Test: test_nvme_parse_addr ...[2024-07-11 16:22:50.313250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:07:13.569 passed 00:07:13.569 00:07:13.569 Run Summary: Type Total Ran Passed Failed Inactive 00:07:13.569 suites 1 1 n/a 0 0 00:07:13.569 tests 25 25 25 0 0 00:07:13.569 asserts 326 326 326 0 n/a 00:07:13.569 00:07:13.569 Elapsed time = 0.008 seconds 00:07:13.569 16:22:50 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:07:13.569 00:07:13.569 00:07:13.569 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.569 http://cunit.sourceforge.net/ 00:07:13.569 00:07:13.569 00:07:13.569 Suite: nvme_ctrlr 00:07:13.569 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-11 16:22:50.350142] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.569 passed 00:07:13.570 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-11 16:22:50.352226] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.570 passed 00:07:13.570 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-11 16:22:50.353851] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.570 passed 00:07:13.570 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-11 16:22:50.355463] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.570 passed 00:07:13.570 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-11 16:22:50.357093] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.570 [2024-07-11 16:22:50.358407] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-11 16:22:50.359812] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-11 16:22:50.361107] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:13.570 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-11 16:22:50.363974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.570 [2024-07-11 16:22:50.366310] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-11 16:22:50.367658] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:13.570 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-11 16:22:50.370584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.829 [2024-07-11 16:22:50.371944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-11 16:22:50.374544] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:13.829 Test: test_nvme_ctrlr_init_delay ...[2024-07-11 16:22:50.377481] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.829 passed 00:07:13.829 Test: test_alloc_io_qpair_rr_1 ...[2024-07-11 16:22:50.379224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.829 [2024-07-11 16:22:50.379501] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:13.829 [2024-07-11 16:22:50.379846] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:13.829 [2024-07-11 16:22:50.380065] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:13.829 [2024-07-11 16:22:50.380240] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:13.829 passed 00:07:13.829 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:07:13.829 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:07:13.829 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-11 16:22:50.381266] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.829 passed 00:07:13.829 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-11 16:22:50.381747] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:13.829 [2024-07-11 16:22:50.381995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:13.829 passed 00:07:13.829 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-11 16:22:50.382635] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:07:13.829 [2024-07-11 16:22:50.382920] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:13.829 [2024-07-11 16:22:50.383149] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:07:13.829 [2024-07-11 16:22:50.383372] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:13.829 passed 00:07:13.829 Test: test_nvme_ctrlr_fail ...[2024-07-11 16:22:50.383828] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:07:13.829 passed 00:07:13.829 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:07:13.829 Test: test_nvme_ctrlr_set_supported_features ...passed 00:07:13.829 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:07:13.829 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-11 16:22:50.385020] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.087 passed 00:07:14.087 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:07:14.087 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:07:14.087 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:07:14.087 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-11 16:22:50.706054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.087 passed 00:07:14.088 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-11 16:22:50.713426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.088 passed 00:07:14.088 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-11 16:22:50.714882] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.088 [2024-07-11 16:22:50.715097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:07:14.088 passed 00:07:14.088 Test: test_alloc_io_qpair_fail ...[2024-07-11 16:22:50.716470] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.088 [2024-07-11 16:22:50.716638] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:07:14.088 passed 00:07:14.088 Test: test_nvme_ctrlr_add_remove_process ...passed 00:07:14.088 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:07:14.088 Test: test_nvme_ctrlr_set_state ...[2024-07-11 16:22:50.717260] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
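The "Specified timeout would cause integer overflow. Defaulting to no timeout." message above is the negative path of timeout handling: converting a caller-supplied millisecond timeout into an absolute tick deadline can overflow a 64-bit counter. A hedged sketch of that kind of guard, with illustrative names and computation (not the SPDK source):

#include <stdint.h>
#include <stdio.h>

/* Returns an absolute deadline in ticks; UINT64_MAX means "no timeout". */
uint64_t deadline_ticks(uint64_t now_ticks, uint64_t timeout_ms, uint64_t ticks_per_ms)
{
    if (timeout_ms != 0 &&
        timeout_ms > (UINT64_MAX - now_ticks) / ticks_per_ms) {
        fprintf(stderr, "Specified timeout would cause integer overflow. "
                        "Defaulting to no timeout.\n");
        return UINT64_MAX;
    }
    return now_ticks + timeout_ms * ticks_per_ms;
}

int main(void)
{
    /* An absurdly large timeout forces the overflow branch, as the unit test does. */
    printf("%llu\n", (unsigned long long)deadline_ticks(1000, UINT64_MAX, 1000));
    return 0;
}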
00:07:14.088 passed 00:07:14.088 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-11 16:22:50.717644] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.088 passed 00:07:14.088 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-11 16:22:50.742707] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.088 passed 00:07:14.088 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-11 16:22:50.785401] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.088 passed 00:07:14.088 Test: test_nvme_ctrlr_reset ...[2024-07-11 16:22:50.787422] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.088 passed 00:07:14.088 Test: test_nvme_ctrlr_aer_callback ...[2024-07-11 16:22:50.788087] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.088 passed 00:07:14.088 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-11 16:22:50.789909] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.088 passed 00:07:14.088 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:07:14.088 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:07:14.088 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-11 16:22:50.792818] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.088 passed 00:07:14.088 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:07:14.088 Test: test_nvme_ctrlr_ana_resize ...[2024-07-11 16:22:50.794744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.088 passed 00:07:14.088 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:07:14.088 Test: test_nvme_transport_ctrlr_ready ...[2024-07-11 16:22:50.796839] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:07:14.088 [2024-07-11 16:22:50.797019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:07:14.088 passed 00:07:14.088 Test: test_nvme_ctrlr_disable ...[2024-07-11 16:22:50.797357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:14.088 passed 00:07:14.088 00:07:14.088 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.088 suites 1 1 n/a 0 0 00:07:14.088 tests 43 43 43 0 0 00:07:14.088 asserts 10418 10418 10418 0 n/a 00:07:14.088 00:07:14.088 Elapsed time = 0.395 seconds 00:07:14.088 16:22:50 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:07:14.088 00:07:14.088 
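Every *_ut binary in this run prints the same CUnit skeleton: the version banner, a Suite: line, one "...passed" verdict per test, and the Run Summary / asserts table. A minimal self-contained harness producing output of that shape, assuming only the stock CUnit "Basic" interface (suite and test names are placeholders, not the SPDK sources; link with -lcunit):

#include <CUnit/Basic.h>

static void test_example(void)
{
    CU_ASSERT_EQUAL(1 + 1, 2);  /* each assertion feeds the "asserts" row */
}

int main(void)
{
    if (CU_initialize_registry() != CUE_SUCCESS)
        return CU_get_error();

    CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);
    if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }

    CU_basic_set_mode(CU_BRM_VERBOSE);  /* emits the per-test "...passed" lines */
    CU_basic_run_tests();               /* emits the Run Summary table */
    unsigned int failures = CU_get_number_of_failures();
    CU_cleanup_registry();
    return failures ? 1 : 0;
}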
00:07:14.088 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.088 http://cunit.sourceforge.net/ 00:07:14.088 00:07:14.088 00:07:14.088 Suite: nvme_ctrlr_cmd 00:07:14.088 Test: test_get_log_pages ...passed 00:07:14.088 Test: test_set_feature_cmd ...passed 00:07:14.088 Test: test_set_feature_ns_cmd ...passed 00:07:14.088 Test: test_get_feature_cmd ...passed 00:07:14.088 Test: test_get_feature_ns_cmd ...passed 00:07:14.088 Test: test_abort_cmd ...passed 00:07:14.088 Test: test_set_host_id_cmds ...[2024-07-11 16:22:50.841858] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:07:14.088 passed 00:07:14.088 Test: test_io_cmd_raw_no_payload_build ...passed 00:07:14.088 Test: test_io_raw_cmd ...passed 00:07:14.088 Test: test_io_raw_cmd_with_md ...passed 00:07:14.088 Test: test_namespace_attach ...passed 00:07:14.088 Test: test_namespace_detach ...passed 00:07:14.088 Test: test_namespace_create ...passed 00:07:14.088 Test: test_namespace_delete ...passed 00:07:14.088 Test: test_doorbell_buffer_config ...passed 00:07:14.088 Test: test_format_nvme ...passed 00:07:14.088 Test: test_fw_commit ...passed 00:07:14.088 Test: test_fw_image_download ...passed 00:07:14.088 Test: test_sanitize ...passed 00:07:14.088 Test: test_directive ...passed 00:07:14.088 Test: test_nvme_request_add_abort ...passed 00:07:14.088 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:07:14.088 Test: test_nvme_ctrlr_cmd_identify ...passed 00:07:14.088 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:07:14.088 00:07:14.088 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.088 suites 1 1 n/a 0 0 00:07:14.088 tests 24 24 24 0 0 00:07:14.088 asserts 198 198 198 0 n/a 00:07:14.088 00:07:14.088 Elapsed time = 0.001 seconds 00:07:14.088 16:22:50 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:07:14.088 00:07:14.088 00:07:14.088 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.088 http://cunit.sourceforge.net/ 00:07:14.088 00:07:14.088 00:07:14.088 Suite: nvme_ctrlr_cmd 00:07:14.088 Test: test_geometry_cmd ...passed 00:07:14.088 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:07:14.088 00:07:14.088 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.088 suites 1 1 n/a 0 0 00:07:14.088 tests 2 2 2 0 0 00:07:14.088 asserts 7 7 7 0 n/a 00:07:14.088 00:07:14.088 Elapsed time = 0.000 seconds 00:07:14.088 16:22:50 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:07:14.347 00:07:14.347 00:07:14.347 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.347 http://cunit.sourceforge.net/ 00:07:14.347 00:07:14.347 00:07:14.347 Suite: nvme 00:07:14.347 Test: test_nvme_ns_construct ...passed 00:07:14.347 Test: test_nvme_ns_uuid ...passed 00:07:14.347 Test: test_nvme_ns_csi ...passed 00:07:14.347 Test: test_nvme_ns_data ...passed 00:07:14.347 Test: test_nvme_ns_set_identify_data ...passed 00:07:14.347 Test: test_spdk_nvme_ns_get_values ...passed 00:07:14.347 Test: test_spdk_nvme_ns_is_active ...passed 00:07:14.347 Test: spdk_nvme_ns_supports ...passed 00:07:14.347 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:07:14.347 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:07:14.347 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:07:14.347 Test: test_nvme_ns_find_id_desc ...passed 00:07:14.347 00:07:14.347 Run Summary: Type Total Ran 
Passed Failed Inactive 00:07:14.347 suites 1 1 n/a 0 0 00:07:14.347 tests 12 12 12 0 0 00:07:14.347 asserts 83 83 83 0 n/a 00:07:14.347 00:07:14.347 Elapsed time = 0.001 seconds 00:07:14.347 16:22:50 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:07:14.347 00:07:14.347 00:07:14.347 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.347 http://cunit.sourceforge.net/ 00:07:14.347 00:07:14.347 00:07:14.347 Suite: nvme_ns_cmd 00:07:14.347 Test: split_test ...passed 00:07:14.347 Test: split_test2 ...passed 00:07:14.347 Test: split_test3 ...passed 00:07:14.347 Test: split_test4 ...passed 00:07:14.347 Test: test_nvme_ns_cmd_flush ...passed 00:07:14.347 Test: test_nvme_ns_cmd_dataset_management ...passed 00:07:14.347 Test: test_nvme_ns_cmd_copy ...passed 00:07:14.347 Test: test_io_flags ...[2024-07-11 16:22:50.949707] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:07:14.347 passed 00:07:14.347 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:07:14.347 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:07:14.347 Test: test_nvme_ns_cmd_reservation_register ...passed 00:07:14.347 Test: test_nvme_ns_cmd_reservation_release ...passed 00:07:14.347 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:07:14.347 Test: test_nvme_ns_cmd_reservation_report ...passed 00:07:14.347 Test: test_cmd_child_request ...passed 00:07:14.347 Test: test_nvme_ns_cmd_readv ...passed 00:07:14.348 Test: test_nvme_ns_cmd_read_with_md ...passed 00:07:14.348 Test: test_nvme_ns_cmd_writev ...[2024-07-11 16:22:50.953871] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:07:14.348 passed 00:07:14.348 Test: test_nvme_ns_cmd_write_with_md ...passed 00:07:14.348 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:07:14.348 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:07:14.348 Test: test_nvme_ns_cmd_comparev ...passed 00:07:14.348 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:07:14.348 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:07:14.348 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:07:14.348 Test: test_nvme_ns_cmd_setup_request ...passed 00:07:14.348 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:07:14.348 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-11 16:22:50.958888] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:14.348 passed 00:07:14.348 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-11 16:22:50.959398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:14.348 passed 00:07:14.348 Test: test_nvme_ns_cmd_verify ...passed 00:07:14.348 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:07:14.348 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:07:14.348 00:07:14.348 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.348 suites 1 1 n/a 0 0 00:07:14.348 tests 32 32 32 0 0 00:07:14.348 asserts 550 550 550 0 n/a 00:07:14.348 00:07:14.348 Elapsed time = 0.007 seconds 00:07:14.348 16:22:50 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:07:14.348 00:07:14.348 00:07:14.348 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.348 http://cunit.sourceforge.net/ 00:07:14.348 00:07:14.348 00:07:14.348 Suite: 
nvme_ns_cmd 00:07:14.348 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:07:14.348 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:07:14.348 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:07:14.348 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:07:14.348 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:07:14.348 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:07:14.348 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:07:14.348 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:07:14.348 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:07:14.348 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:07:14.348 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:07:14.348 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:07:14.348 00:07:14.348 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.348 suites 1 1 n/a 0 0 00:07:14.348 tests 12 12 12 0 0 00:07:14.348 asserts 123 123 123 0 n/a 00:07:14.348 00:07:14.348 Elapsed time = 0.001 seconds 00:07:14.348 16:22:51 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:07:14.348 00:07:14.348 00:07:14.348 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.348 http://cunit.sourceforge.net/ 00:07:14.348 00:07:14.348 00:07:14.348 Suite: nvme_qpair 00:07:14.348 Test: test3 ...passed 00:07:14.348 Test: test_ctrlr_failed ...passed 00:07:14.348 Test: struct_packing ...passed 00:07:14.348 Test: test_nvme_qpair_process_completions ...[2024-07-11 16:22:51.021982] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:14.348 [2024-07-11 16:22:51.022433] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:14.348 [2024-07-11 16:22:51.022632] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:14.348 [2024-07-11 16:22:51.022825] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:07:14.348 passed 00:07:14.348 Test: test_nvme_completion_is_retry ...passed 00:07:14.348 Test: test_get_status_string ...passed 00:07:14.348 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:07:14.348 Test: test_nvme_qpair_submit_request ...passed 00:07:14.348 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:07:14.348 Test: test_nvme_qpair_manual_complete_request ...passed 00:07:14.348 Test: test_nvme_qpair_init_deinit ...[2024-07-11 16:22:51.024666] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:14.348 passed 00:07:14.348 Test: test_nvme_get_sgl_print_info ...passed 00:07:14.348 00:07:14.348 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.348 suites 1 1 n/a 0 0 00:07:14.348 tests 12 12 12 0 0 00:07:14.348 asserts 154 154 154 0 n/a 00:07:14.348 00:07:14.348 Elapsed time = 0.002 seconds 00:07:14.348 16:22:51 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:07:14.348 00:07:14.348 00:07:14.348 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.348 http://cunit.sourceforge.net/ 00:07:14.348 
00:07:14.348 00:07:14.348 Suite: nvme_pcie 00:07:14.348 Test: test_prp_list_append ...[2024-07-11 16:22:51.060643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:14.348 [2024-07-11 16:22:51.061276] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:07:14.348 [2024-07-11 16:22:51.061527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:07:14.348 [2024-07-11 16:22:51.062104] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:14.348 [2024-07-11 16:22:51.062459] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:14.348 passed 00:07:14.348 Test: test_nvme_pcie_hotplug_monitor ...passed 00:07:14.348 Test: test_shadow_doorbell_update ...passed 00:07:14.348 Test: test_build_contig_hw_sgl_request ...passed 00:07:14.348 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:07:14.348 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:07:14.348 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:07:14.348 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-07-11 16:22:51.065796] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:14.348 passed 00:07:14.348 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:07:14.348 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:07:14.348 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-07-11 16:22:51.067067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
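The test_prp_list_append failures above ("virt_addr 0x100001 not dword aligned", "PRP 2 not page aligned (0x900800)") exercise the NVMe PRP rules: the first PRP entry may begin at any dword-aligned offset within a page, while every later entry must be page aligned. A hedged sketch of that validation, assuming a 4 KiB page and illustrative names:

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u  /* assumed memory page size */

bool prp_entry_is_valid(uint64_t addr, bool is_first_entry)
{
    if (addr & 0x3)
        return false;  /* rejects 0x100001: not dword aligned */
    if (!is_first_entry && (addr & (PAGE_SIZE - 1)) != 0)
        return false;  /* rejects 0x900800 as PRP 2: offset 0x800 into the page */
    return true;
}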
00:07:14.348 passed 00:07:14.348 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-07-11 16:22:51.068033] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:07:14.348 passed 00:07:14.348 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-11 16:22:51.068700] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:07:14.348 passed 00:07:14.348 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-07-11 16:22:51.069457] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:07:14.348 passed 00:07:14.348 00:07:14.348 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.348 suites 1 1 n/a 0 0 00:07:14.348 tests 14 14 14 0 0 00:07:14.348 asserts 235 235 235 0 n/a 00:07:14.348 00:07:14.349 Elapsed time = 0.004 seconds 00:07:14.349 16:22:51 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:07:14.349 00:07:14.349 00:07:14.349 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.349 http://cunit.sourceforge.net/ 00:07:14.349 00:07:14.349 00:07:14.349 Suite: nvme_ns_cmd 00:07:14.349 Test: nvme_poll_group_create_test ...passed 00:07:14.349 Test: nvme_poll_group_add_remove_test ...passed 00:07:14.349 Test: nvme_poll_group_process_completions ...passed 00:07:14.349 Test: nvme_poll_group_destroy_test ...passed 00:07:14.349 Test: nvme_poll_group_get_free_stats ...passed 00:07:14.349 00:07:14.349 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.349 suites 1 1 n/a 0 0 00:07:14.349 tests 5 5 5 0 0 00:07:14.349 asserts 75 75 75 0 n/a 00:07:14.349 00:07:14.349 Elapsed time = 0.000 seconds 00:07:14.349 16:22:51 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:07:14.349 00:07:14.349 00:07:14.349 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.349 http://cunit.sourceforge.net/ 00:07:14.349 00:07:14.349 00:07:14.349 Suite: nvme_quirks 00:07:14.349 Test: test_nvme_quirks_striping ...passed 00:07:14.349 00:07:14.349 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.349 suites 1 1 n/a 0 0 00:07:14.349 tests 1 1 1 0 0 00:07:14.349 asserts 5 5 5 0 n/a 00:07:14.349 00:07:14.349 Elapsed time = 0.000 seconds 00:07:14.349 16:22:51 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:07:14.608 00:07:14.608 00:07:14.608 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.608 http://cunit.sourceforge.net/ 00:07:14.608 00:07:14.608 00:07:14.608 Suite: nvme_tcp 00:07:14.608 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:07:14.608 Test: test_nvme_tcp_build_iovs ...passed 00:07:14.608 Test: test_nvme_tcp_build_sgl_request ...[2024-07-11 16:22:51.160631] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffdd08ab090, and the iovcnt=16, remaining_size=28672 00:07:14.608 passed 00:07:14.608 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:07:14.608 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:07:14.608 Test: test_nvme_tcp_req_complete_safe ...passed 00:07:14.608 Test: test_nvme_tcp_req_get ...passed 00:07:14.608 Test: test_nvme_tcp_req_init ...passed 00:07:14.608 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:07:14.608 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:07:14.608 Test: 
test_nvme_tcp_qpair_set_recv_state ...passed 00:07:14.608 Test: test_nvme_tcp_alloc_reqs ...[2024-07-11 16:22:51.161366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08acdb0 is same with the state(6) to be set 00:07:14.608 passed 00:07:14.608 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-11 16:22:51.161670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08abf40 is same with the state(5) to be set 00:07:14.608 passed 00:07:14.608 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-11 16:22:51.161723] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffdd08aca70 00:07:14.608 [2024-07-11 16:22:51.161767] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:07:14.608 [2024-07-11 16:22:51.161843] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08ac400 is same with the state(5) to be set 00:07:14.608 [2024-07-11 16:22:51.161891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:07:14.608 [2024-07-11 16:22:51.161958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08ac400 is same with the state(5) to be set 00:07:14.608 [2024-07-11 16:22:51.161991] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:07:14.608 [2024-07-11 16:22:51.162013] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08ac400 is same with the state(5) to be set 00:07:14.608 [2024-07-11 16:22:51.162045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08ac400 is same with the state(5) to be set 00:07:14.608 [2024-07-11 16:22:51.162084] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08ac400 is same with the state(5) to be set 00:07:14.608 [2024-07-11 16:22:51.162143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08ac400 is same with the state(5) to be set 00:07:14.608 [2024-07-11 16:22:51.162172] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08ac400 is same with the state(5) to be set 00:07:14.608 passed 00:07:14.608 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-11 16:22:51.162207] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08ac400 is same with the state(5) to be set 00:07:14.609 [2024-07-11 16:22:51.162348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:07:14.609 [2024-07-11 16:22:51.162386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:14.609 passed 00:07:14.609 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:07:14.609 Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-11 
16:22:51.162604] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:14.609 passed 00:07:14.609 Test: test_nvme_tcp_icresp_handle ...[2024-07-11 16:22:51.162700] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffdd08ac5b0): PDU Sequence Error 00:07:14.609 [2024-07-11 16:22:51.162796] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:07:14.609 [2024-07-11 16:22:51.162836] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:07:14.609 [2024-07-11 16:22:51.162867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08abf50 is same with the state(5) to be set 00:07:14.609 [2024-07-11 16:22:51.162898] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:07:14.609 passed 00:07:14.609 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:07:14.609 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:07:14.609 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:07:14.609 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...passed 00:07:14.609 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-11 16:22:51.162929] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08abf50 is same with the state(5) to be set 00:07:14.609 [2024-07-11 16:22:51.162970] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08abf50 is same with the state(0) to be set 00:07:14.609 [2024-07-11 16:22:51.163015] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffdd08aca70): PDU Sequence Error 00:07:14.609 [2024-07-11 16:22:51.163087] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffdd08ab230 00:07:14.609 [2024-07-11 16:22:51.163211] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffdd08aa8b0, errno=0, rc=0 00:07:14.609 [2024-07-11 16:22:51.163256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08aa8b0 is same with the state(5) to be set 00:07:14.609 [2024-07-11 16:22:51.163312] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdd08aa8b0 is same with the state(5) to be set 00:07:14.609 [2024-07-11 16:22:51.163362] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffdd08aa8b0 (0): Success 00:07:14.609 [2024-07-11 16:22:51.163396] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffdd08aa8b0 (0): Success 00:07:14.609 [2024-07-11 16:22:51.275660] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
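The back-to-back "Failed to create qpair with size 0/1. Minimum queue size is 2." messages are the negative paths of queue-pair creation: a circular NVMe queue keeps one slot empty to distinguish full from empty, so two entries is the smallest usable size. A hedged sketch of the guard, with illustrative names rather than the SPDK code:

#include <stdbool.h>
#include <stdio.h>

#define MIN_QUEUE_SIZE 2u  /* one slot always stays empty in a ring queue */

bool qpair_size_is_valid(unsigned int size)
{
    if (size < MIN_QUEUE_SIZE) {
        fprintf(stderr, "Failed to create qpair with size %u. "
                        "Minimum queue size is %u.\n", size, MIN_QUEUE_SIZE);
        return false;
    }
    return true;
}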
00:07:14.609 [2024-07-11 16:22:51.275780] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:14.609 passed 00:07:14.609 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:07:14.609 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:07:14.609 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-11 16:22:51.275978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:14.609 [2024-07-11 16:22:51.276019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:14.609 [2024-07-11 16:22:51.276212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:14.609 [2024-07-11 16:22:51.276253] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:14.609 [2024-07-11 16:22:51.276366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:07:14.609 [2024-07-11 16:22:51.276419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:14.609 passed 00:07:14.609 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-11 16:22:51.276520] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:07:14.609 [2024-07-11 16:22:51.276582] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:14.609 [2024-07-11 16:22:51.276723] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:07:14.609 [2024-07-11 16:22:51.276759] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:07:14.609 passed 00:07:14.609 00:07:14.609 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.609 suites 1 1 n/a 0 0 00:07:14.609 tests 27 27 27 0 0 00:07:14.609 asserts 624 624 624 0 n/a 00:07:14.609 00:07:14.609 Elapsed time = 0.116 seconds 00:07:14.609 16:22:51 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:07:14.609 00:07:14.609 00:07:14.609 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.609 http://cunit.sourceforge.net/ 00:07:14.609 00:07:14.609 00:07:14.609 Suite: nvme_transport 00:07:14.609 Test: test_nvme_get_transport ...passed 00:07:14.609 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:07:14.609 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:07:14.609 Test: test_nvme_transport_poll_group_add_remove ...passed 00:07:14.609 Test: test_ctrlr_get_memory_domains ...passed 00:07:14.609 00:07:14.609 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.609 suites 1 1 n/a 0 0 00:07:14.609 tests 5 5 5 0 0 00:07:14.609 asserts 28 28 28 0 n/a 00:07:14.609 00:07:14.609 Elapsed time = 0.000 seconds 00:07:14.609 16:22:51 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:07:14.609 00:07:14.609 00:07:14.609 CUnit - A unit testing framework for 
C - Version 2.1-3 00:07:14.609 http://cunit.sourceforge.net/ 00:07:14.609 00:07:14.609 00:07:14.609 Suite: nvme_io_msg 00:07:14.609 Test: test_nvme_io_msg_send ...passed 00:07:14.609 Test: test_nvme_io_msg_process ...passed 00:07:14.609 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:07:14.609 00:07:14.609 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.609 suites 1 1 n/a 0 0 00:07:14.609 tests 3 3 3 0 0 00:07:14.609 asserts 56 56 56 0 n/a 00:07:14.609 00:07:14.609 Elapsed time = 0.000 seconds 00:07:14.609 16:22:51 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:07:14.609 00:07:14.609 00:07:14.609 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.609 http://cunit.sourceforge.net/ 00:07:14.609 00:07:14.609 00:07:14.609 Suite: nvme_pcie_common 00:07:14.609 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:07:14.609 Test: test_nvme_pcie_qpair_construct_destroy ...[2024-07-11 16:22:51.364079] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:07:14.609 passed 00:07:14.609 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:07:14.609 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-11 16:22:51.364883] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:07:14.609 [2024-07-11 16:22:51.365010] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:07:14.609 [2024-07-11 16:22:51.365045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:07:14.609 passed 00:07:14.609 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:07:14.609 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:07:14.609 00:07:14.609 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.609 suites 1 1 n/a 0 0 00:07:14.609 tests 6 6 6 0 0 00:07:14.610 asserts 148 148 148 0 n/a 00:07:14.610 00:07:14.610 Elapsed time = 0.001 seconds 00:07:14.610 [2024-07-11 16:22:51.365426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:14.610 [2024-07-11 16:22:51.365466] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:14.610 16:22:51 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:07:14.610 00:07:14.610 00:07:14.610 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.610 http://cunit.sourceforge.net/ 00:07:14.610 00:07:14.610 00:07:14.610 Suite: nvme_fabric 00:07:14.610 Test: test_nvme_fabric_prop_set_cmd ...passed 00:07:14.610 Test: test_nvme_fabric_prop_get_cmd ...passed 00:07:14.610 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:07:14.610 Test: test_nvme_fabric_discover_probe ...passed 00:07:14.610 Test: test_nvme_fabric_qpair_connect ...[2024-07-11 16:22:51.391382] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:07:14.610 passed 00:07:14.610 00:07:14.610 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.610 suites 1 
1 n/a 0 0 00:07:14.610 tests 5 5 5 0 0 00:07:14.610 asserts 60 60 60 0 n/a 00:07:14.610 00:07:14.610 Elapsed time = 0.001 seconds 00:07:14.610 16:22:51 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:07:14.869 00:07:14.869 00:07:14.869 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.869 http://cunit.sourceforge.net/ 00:07:14.869 00:07:14.869 00:07:14.869 Suite: nvme_opal 00:07:14.869 Test: test_opal_nvme_security_recv_send_done ...passed 00:07:14.869 Test: test_opal_add_short_atom_header ...[2024-07-11 16:22:51.423011] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:07:14.869 passed 00:07:14.869 00:07:14.869 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.869 suites 1 1 n/a 0 0 00:07:14.869 tests 2 2 2 0 0 00:07:14.869 asserts 22 22 22 0 n/a 00:07:14.869 00:07:14.869 Elapsed time = 0.000 seconds 00:07:14.869 00:07:14.869 real 0m1.265s 00:07:14.869 user 0m0.662s 00:07:14.869 sys 0m0.402s 00:07:14.869 ************************************ 00:07:14.869 END TEST unittest_nvme 00:07:14.869 ************************************ 00:07:14.869 16:22:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.869 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:14.869 16:22:51 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:14.869 16:22:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:14.869 16:22:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.869 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:07:14.869 ************************************ 00:07:14.869 START TEST unittest_log 00:07:14.869 ************************************ 00:07:14.869 16:22:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:14.869 00:07:14.869 00:07:14.869 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.869 http://cunit.sourceforge.net/ 00:07:14.869 00:07:14.869 00:07:14.869 Suite: log 00:07:14.869 Test: log_test ...[2024-07-11 16:22:51.498790] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:07:14.869 [2024-07-11 16:22:51.499169] log_ut.c: 55:log_test: *DEBUG*: log test 00:07:14.869 log dump test: 00:07:14.869 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:07:14.869 spdk dump test: 00:07:14.869 passed 00:07:14.869 Test: deprecation ...00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:07:14.869 spdk dump test: 00:07:14.869 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:07:14.869 00000010 65 20 63 68 61 72 73 e chars 00:07:15.806 passed 00:07:15.806 00:07:15.806 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.806 suites 1 1 n/a 0 0 00:07:15.806 tests 2 2 2 0 0 00:07:15.806 asserts 73 73 73 0 n/a 00:07:15.806 00:07:15.806 Elapsed time = 0.001 seconds 00:07:15.806 00:07:15.806 real 0m1.034s 00:07:15.806 user 0m0.025s 00:07:15.806 sys 0m0.008s 00:07:15.806 ************************************ 00:07:15.806 END TEST unittest_log 00:07:15.806 ************************************ 00:07:15.806 16:22:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.806 16:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:15.806 16:22:52 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:15.806 16:22:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 
']' 00:07:15.806 16:22:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.806 16:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:15.806 ************************************ 00:07:15.806 START TEST unittest_lvol 00:07:15.806 ************************************ 00:07:15.806 16:22:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:15.806 00:07:15.806 00:07:15.806 CUnit - A unit testing framework for C - Version 2.1-3 00:07:15.806 http://cunit.sourceforge.net/ 00:07:15.806 00:07:15.806 00:07:15.806 Suite: lvol 00:07:15.806 Test: lvs_init_unload_success ...[2024-07-11 16:22:52.582127] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:07:15.806 passed 00:07:15.806 Test: lvs_init_destroy_success ...[2024-07-11 16:22:52.583054] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:07:15.806 passed 00:07:15.806 Test: lvs_init_opts_success ...passed 00:07:15.806 Test: lvs_unload_lvs_is_null_fail ...[2024-07-11 16:22:52.583828] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:07:15.806 passed 00:07:15.806 Test: lvs_names ...[2024-07-11 16:22:52.584052] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:07:15.806 [2024-07-11 16:22:52.584297] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:07:15.806 [2024-07-11 16:22:52.584623] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:07:15.806 passed 00:07:15.806 Test: lvol_create_destroy_success ...passed 00:07:15.806 Test: lvol_create_fail ...[2024-07-11 16:22:52.585720] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:07:15.806 [2024-07-11 16:22:52.585957] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:07:15.806 passed 00:07:15.806 Test: lvol_destroy_fail ...[2024-07-11 16:22:52.586556] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:07:15.806 passed 00:07:15.806 Test: lvol_close ...[2024-07-11 16:22:52.587083] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:07:15.806 [2024-07-11 16:22:52.587243] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:07:15.806 passed 00:07:15.806 Test: lvol_resize ...passed 00:07:15.806 Test: lvol_set_read_only ...passed 00:07:15.806 Test: test_lvs_load ...[2024-07-11 16:22:52.588717] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:07:15.806 [2024-07-11 16:22:52.588890] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:07:15.806 passed 00:07:15.806 Test: lvols_load ...[2024-07-11 16:22:52.589463] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:15.806 [2024-07-11 16:22:52.589743] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:15.806 passed 00:07:15.806 Test: lvol_open ...passed 00:07:15.806 Test: lvol_snapshot ...passed 00:07:15.806 Test: lvol_snapshot_fail ...[2024-07-11 
16:22:52.591146] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:07:15.806 passed 00:07:15.806 Test: lvol_clone ...passed 00:07:15.806 Test: lvol_clone_fail ...[2024-07-11 16:22:52.592244] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:07:15.806 passed 00:07:15.806 Test: lvol_iter_clones ...passed 00:07:15.806 Test: lvol_refcnt ...[2024-07-11 16:22:52.593304] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 9f1dba52-0b32-459d-947e-020dfc8199a3 because it is still open 00:07:15.806 passed 00:07:15.806 Test: lvol_names ...[2024-07-11 16:22:52.593806] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:07:15.806 [2024-07-11 16:22:52.594022] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:15.806 [2024-07-11 16:22:52.594331] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:07:15.806 passed 00:07:15.806 Test: lvol_create_thin_provisioned ...passed 00:07:15.806 Test: lvol_rename ...[2024-07-11 16:22:52.595321] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:15.806 [2024-07-11 16:22:52.595560] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:07:15.806 passed 00:07:15.806 Test: lvs_rename ...[2024-07-11 16:22:52.596099] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:07:15.806 passed 00:07:15.806 Test: lvol_inflate ...[2024-07-11 16:22:52.596621] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:15.806 passed 00:07:15.806 Test: lvol_decouple_parent ...[2024-07-11 16:22:52.597169] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:15.806 passed 00:07:15.806 Test: lvol_get_xattr ...passed 00:07:15.806 Test: lvol_esnap_reload ...passed 00:07:15.806 Test: lvol_esnap_create_bad_args ...[2024-07-11 16:22:52.598312] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:07:15.806 [2024-07-11 16:22:52.598442] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
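"Name has no null terminator." above is lvs_verify_lvol_name rejecting a name field that fills its whole buffer without a terminating byte. A hedged sketch of such a check, assuming a fixed-size name field (the 64-byte size is illustrative):

#include <stdbool.h>
#include <string.h>

#define NAME_BUF_SIZE 64  /* assumed size of the fixed name field */

bool name_is_terminated(const char *name)
{
    /* memchr scans at most NAME_BUF_SIZE bytes, so an unterminated
     * field is detected without reading past the buffer. */
    return memchr(name, '\0', NAME_BUF_SIZE) != NULL;
}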
00:07:15.806 [2024-07-11 16:22:52.598590] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:07:15.806 [2024-07-11 16:22:52.598814] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:15.806 [2024-07-11 16:22:52.599073] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:07:15.806 passed 00:07:15.806 Test: lvol_esnap_create_delete ...passed 00:07:15.806 Test: lvol_esnap_load_esnaps ...[2024-07-11 16:22:52.599860] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:07:15.806 passed 00:07:15.806 Test: lvol_esnap_missing ...[2024-07-11 16:22:52.600288] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:15.806 [2024-07-11 16:22:52.600459] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:15.806 passed 00:07:15.806 Test: lvol_esnap_hotplug ... 00:07:15.806 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:07:15.806 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:07:15.806 [2024-07-11 16:22:52.601928] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 1fee62a4-f469-4fc6-9df4-dfe77501e8de: failed to create esnap bs_dev: error -12 00:07:15.806 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:07:15.806 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:07:15.806 [2024-07-11 16:22:52.602480] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol acba09e3-5c2c-4cad-befd-9cedf691bd56: failed to create esnap bs_dev: error -12 00:07:15.806 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:07:15.806 [2024-07-11 16:22:52.602831] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol b218b2fa-be9c-4d01-999e-7b4c0ae30ff3: failed to create esnap bs_dev: error -12 00:07:15.806 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:07:15.806 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:07:15.806 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:07:15.806 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:07:15.806 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:07:15.806 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:07:15.806 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:07:15.806 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:07:15.806 passed 00:07:15.806 Test: lvol_get_by ...passed 00:07:15.806 00:07:15.806 Run Summary: Type Total Ran Passed Failed Inactive 00:07:15.806 suites 1 1 n/a 0 0 00:07:15.806 tests 34 34 34 0 0 00:07:15.806 asserts 1439 1439 1439 0 n/a 00:07:15.806 00:07:15.806 Elapsed time = 0.014 seconds 00:07:16.066 ************************************ 00:07:16.066 END TEST unittest_lvol 00:07:16.066 
************************************ 00:07:16.066 00:07:16.066 real 0m0.060s 00:07:16.066 user 0m0.016s 00:07:16.066 sys 0m0.035s 00:07:16.066 16:22:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.066 16:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:16.066 16:22:52 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:16.066 16:22:52 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:16.066 16:22:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.066 16:22:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.066 16:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:16.066 ************************************ 00:07:16.066 START TEST unittest_nvme_rdma 00:07:16.066 ************************************ 00:07:16.066 16:22:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:16.066 00:07:16.066 00:07:16.066 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.066 http://cunit.sourceforge.net/ 00:07:16.066 00:07:16.066 00:07:16.066 Suite: nvme_rdma 00:07:16.066 Test: test_nvme_rdma_build_sgl_request ...[2024-07-11 16:22:52.691808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:07:16.066 [2024-07-11 16:22:52.692366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:16.066 [2024-07-11 16:22:52.692575] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:07:16.066 passed 00:07:16.066 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:07:16.066 Test: test_nvme_rdma_build_contig_request ...[2024-07-11 16:22:52.693146] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:16.066 passed 00:07:16.066 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:07:16.066 Test: test_nvme_rdma_create_reqs ...[2024-07-11 16:22:52.693637] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:07:16.066 passed 00:07:16.066 Test: test_nvme_rdma_create_rsps ...[2024-07-11 16:22:52.694243] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:07:16.066 passed 00:07:16.066 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-11 16:22:52.694713] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:16.066 [2024-07-11 16:22:52.694871] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
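The nvme_rdma errors above ("SGL length 16777216 exceeds max keyed SGL block size 16777215") come from the NVMe keyed SGL data block descriptor, whose length field is 3 bytes wide: a single descriptor can address at most 2^24 - 1 bytes, and the test submits exactly one byte more. A hedged sketch of the bound check (illustrative names):

#include <stdbool.h>
#include <stdint.h>

#define MAX_KEYED_SGL_LEN ((1u << 24) - 1)  /* 16777215: 24-bit length field */

bool keyed_sgl_length_is_valid(uint64_t length)
{
    return length <= MAX_KEYED_SGL_LEN;  /* 16777216 fails by one byte */
}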
00:07:16.066 passed 00:07:16.066 Test: test_nvme_rdma_poller_create ...passed 00:07:16.066 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-07-11 16:22:52.695383] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:07:16.066 passed 00:07:16.066 Test: test_nvme_rdma_ctrlr_construct ...passed 00:07:16.066 Test: test_nvme_rdma_req_put_and_get ...passed 00:07:16.066 Test: test_nvme_rdma_req_init ...passed 00:07:16.066 Test: test_nvme_rdma_validate_cm_event ...[2024-07-11 16:22:52.696372] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:07:16.066 [2024-07-11 16:22:52.696525] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:07:16.066 passed 00:07:16.066 Test: test_nvme_rdma_qpair_init ...passed 00:07:16.066 Test: test_nvme_rdma_qpair_submit_request ...passed 00:07:16.066 Test: test_nvme_rdma_memory_domain ...[2024-07-11 16:22:52.697399] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:07:16.066 passed 00:07:16.066 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:07:16.066 Test: test_rdma_get_memory_translation ...[2024-07-11 16:22:52.697839] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:07:16.066 [2024-07-11 16:22:52.697997] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:07:16.066 passed 00:07:16.066 Test: test_get_rdma_qpair_from_wc ...passed 00:07:16.066 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:07:16.066 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-11 16:22:52.698610] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:16.066 [2024-07-11 16:22:52.698750] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:16.066 passed 00:07:16.066 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-11 16:22:52.699156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:07:16.066 [2024-07-11 16:22:52.699299] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:07:16.066 [2024-07-11 16:22:52.699443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fffa5255080 on poll group 0x60b0000001a0 00:07:16.066 [2024-07-11 16:22:52.699611] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:07:16.066 [2024-07-11 16:22:52.699753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:07:16.066 [2024-07-11 16:22:52.699893] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fffa5255080 on poll group 0x60b0000001a0 00:07:16.066 [2024-07-11 16:22:52.700063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:16.066 passed 00:07:16.066 00:07:16.066 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.066 suites 1 1 n/a 0 0 00:07:16.066 tests 22 22 22 0 0 00:07:16.066 asserts 412 412 412 0 n/a 00:07:16.066 00:07:16.066 Elapsed time = 0.004 seconds 00:07:16.066 00:07:16.066 real 0m0.042s 00:07:16.066 user 0m0.015s 00:07:16.066 sys 0m0.022s 00:07:16.066 16:22:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.066 16:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:16.066 ************************************ 00:07:16.066 END TEST unittest_nvme_rdma 00:07:16.066 ************************************ 00:07:16.066 16:22:52 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:16.066 16:22:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.066 16:22:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.066 16:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:16.066 ************************************ 00:07:16.066 START TEST unittest_nvmf_transport 00:07:16.066 ************************************ 00:07:16.066 16:22:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:16.066 00:07:16.066 00:07:16.066 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.066 http://cunit.sourceforge.net/ 00:07:16.066 00:07:16.066 00:07:16.066 Suite: nvmf 00:07:16.066 Test: test_spdk_nvmf_transport_create ...[2024-07-11 16:22:52.790049] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:07:16.066 [2024-07-11 16:22:52.790402] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:07:16.066 [2024-07-11 16:22:52.790464] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:07:16.066 [2024-07-11 16:22:52.790576] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:07:16.066 passed 00:07:16.066 Test: test_nvmf_transport_poll_group_create ...passed 00:07:16.066 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-11 16:22:52.790808] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
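test_spdk_nvmf_transport_create above walks the option checks one at a time: a zero io_unit_size, an io_unit_size (131072) larger than the iobuf pool's large buffer (65536), and a max_io_size that must be a power of two and at least 8 KiB. A hedged sketch of those checks; the pool limit and names are illustrative, not SPDK's actual configuration:

#include <stdbool.h>
#include <stdint.h>

#define LARGE_BUF_SIZE 65536u   /* assumed iobuf large-buffer size */
#define MIN_MAX_IO_SIZE 8192u   /* "greater than or equal 8KB" */

static bool is_power_of_two(uint32_t v)
{
    return v != 0 && (v & (v - 1)) == 0;
}

bool transport_opts_are_valid(uint32_t io_unit_size, uint32_t max_io_size)
{
    if (io_unit_size == 0)
        return false;  /* "io_unit_size cannot be 0" */
    if (io_unit_size > LARGE_BUF_SIZE)
        return false;  /* larger than the iobuf pool large buffer */
    if (!is_power_of_two(max_io_size) || max_io_size < MIN_MAX_IO_SIZE)
        return false;  /* must be a power of 2 and >= 8 KiB */
    return true;
}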
00:07:16.066 [2024-07-11 16:22:52.790896] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:07:16.066 [2024-07-11 16:22:52.790919] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:07:16.066 passed 00:07:16.066 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:07:16.066 00:07:16.066 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.066 suites 1 1 n/a 0 0 00:07:16.066 tests 4 4 4 0 0 00:07:16.066 asserts 49 49 49 0 n/a 00:07:16.066 00:07:16.066 Elapsed time = 0.001 seconds 00:07:16.066 00:07:16.066 real 0m0.043s 00:07:16.066 user 0m0.031s 00:07:16.066 sys 0m0.012s 00:07:16.066 16:22:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.066 16:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:16.066 ************************************ 00:07:16.066 END TEST unittest_nvmf_transport 00:07:16.066 ************************************ 00:07:16.066 16:22:52 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:16.066 16:22:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.066 16:22:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.066 16:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:16.066 ************************************ 00:07:16.066 START TEST unittest_rdma 00:07:16.066 ************************************ 00:07:16.066 16:22:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:16.326 00:07:16.326 00:07:16.326 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.326 http://cunit.sourceforge.net/ 00:07:16.326 00:07:16.326 00:07:16.326 Suite: rdma_common 00:07:16.326 Test: test_spdk_rdma_pd ...[2024-07-11 16:22:52.878659] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:16.326 [2024-07-11 16:22:52.879228] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:07:16.326 passed 00:07:16.326 00:07:16.326 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.326 suites 1 1 n/a 0 0 00:07:16.326 tests 1 1 1 0 0 00:07:16.326 asserts 31 31 31 0 n/a 00:07:16.326 00:07:16.326 Elapsed time = 0.001 seconds 00:07:16.326 00:07:16.326 real 0m0.027s 00:07:16.326 user 0m0.012s 00:07:16.326 sys 0m0.015s 00:07:16.326 16:22:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.326 16:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:16.326 ************************************ 00:07:16.326 END TEST unittest_rdma 00:07:16.326 ************************************ 00:07:16.326 16:22:52 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:16.326 16:22:52 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:16.326 16:22:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.326 16:22:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.326 16:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:16.326 ************************************ 00:07:16.326 START TEST unittest_nvme_cuse 00:07:16.326 ************************************ 00:07:16.326 16:22:52 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:16.326 00:07:16.326 00:07:16.326 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.326 http://cunit.sourceforge.net/ 00:07:16.326 00:07:16.326 00:07:16.326 Suite: nvme_cuse 00:07:16.326 Test: test_cuse_nvme_submit_io_read_write ...passed 00:07:16.326 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:07:16.326 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:07:16.326 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:07:16.326 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:07:16.326 Test: test_cuse_nvme_submit_io ...[2024-07-11 16:22:52.961031] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:07:16.326 passed 00:07:16.326 Test: test_cuse_nvme_reset ...[2024-07-11 16:22:52.961629] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:07:16.326 passed 00:07:16.326 Test: test_nvme_cuse_stop ...passed 00:07:16.326 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:07:16.326 00:07:16.326 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.326 suites 1 1 n/a 0 0 00:07:16.326 tests 9 9 9 0 0 00:07:16.326 asserts 121 121 121 0 n/a 00:07:16.326 00:07:16.326 Elapsed time = 0.002 seconds 00:07:16.326 00:07:16.326 real 0m0.033s 00:07:16.326 user 0m0.027s 00:07:16.326 sys 0m0.005s 00:07:16.326 16:22:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.326 16:22:52 -- common/autotest_common.sh@10 -- # set +x 00:07:16.326 ************************************ 00:07:16.326 END TEST unittest_nvme_cuse 00:07:16.326 ************************************ 00:07:16.326 16:22:53 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:07:16.326 16:22:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.326 16:22:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.326 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:16.326 ************************************ 00:07:16.326 START TEST unittest_nvmf 00:07:16.326 ************************************ 00:07:16.326 16:22:53 -- common/autotest_common.sh@1104 -- # unittest_nvmf 00:07:16.326 16:22:53 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:07:16.326 00:07:16.326 00:07:16.326 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.326 http://cunit.sourceforge.net/ 00:07:16.326 00:07:16.326 00:07:16.326 Suite: nvmf 00:07:16.327 Test: test_get_log_page ...[2024-07-11 16:22:53.042470] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:07:16.327 passed 00:07:16.327 Test: test_process_fabrics_cmd ...passed 00:07:16.327 Test: test_connect ...[2024-07-11 16:22:53.043289] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:07:16.327 [2024-07-11 16:22:53.043416] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:07:16.327 [2024-07-11 16:22:53.043466] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:07:16.327 [2024-07-11 16:22:53.043494] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
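The test_connect failures here and just below boil down to bounds checks on the fabrics connect command. A rough illustration follows; the struct and limits are simplified stand-ins inferred from the logged errors, not SPDK's wire-format definitions.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical, cut-down connect command for illustration only. */
    struct connect_cmd {
        uint16_t recfmt;
        uint16_t sqsize;        /* 0-based queue size */
        char     hostnqn[256];
    };

    static bool connect_cmd_ok(const struct connect_cmd *c, bool admin_queue)
    {
        if (c->recfmt != 0)
            return false;                      /* "unsupported RECFMT 1234" */
        if (memchr(c->hostnqn, '\0', sizeof(c->hostnqn)) == NULL)
            return false;                      /* "HOSTNQN is not null terminated" */
        uint16_t max = admin_queue ? 31 : 63;  /* "min 1, max 31" / "min 1, max 63" */
        return c->sqsize >= 1 && c->sqsize <= max;
    }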
00:07:16.327 [2024-07-11 16:22:53.043629] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:07:16.327 [2024-07-11 16:22:53.043657] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:07:16.327 [2024-07-11 16:22:53.043749] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:07:16.327 [2024-07-11 16:22:53.043779] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:07:16.327 [2024-07-11 16:22:53.043887] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:07:16.327 [2024-07-11 16:22:53.043955] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:07:16.327 [2024-07-11 16:22:53.044193] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:07:16.327 [2024-07-11 16:22:53.044272] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:07:16.327 [2024-07-11 16:22:53.044393] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:07:16.327 [2024-07-11 16:22:53.044477] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:07:16.327 [2024-07-11 16:22:53.044575] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:07:16.327 [2024-07-11 16:22:53.044704] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:07:16.327 passed 00:07:16.327 Test: test_get_ns_id_desc_list ...passed 00:07:16.327 Test: test_identify_ns ...[2024-07-11 16:22:53.044917] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:16.327 [2024-07-11 16:22:53.045159] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:07:16.327 [2024-07-11 16:22:53.045303] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:07:16.327 passed 00:07:16.327 Test: test_identify_ns_iocs_specific ...[2024-07-11 16:22:53.045443] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:16.327 [2024-07-11 16:22:53.045710] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:16.327 passed 00:07:16.327 Test: test_reservation_write_exclusive ...passed 00:07:16.327 Test: test_reservation_exclusive_access ...passed 00:07:16.327 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:07:16.327 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:07:16.327 Test: test_reservation_notification_log_page ...passed 00:07:16.327 Test: test_get_dif_ctx ...passed 00:07:16.327 Test: test_set_get_features ...[2024-07-11 16:22:53.046222] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:16.327 [2024-07-11 16:22:53.046267] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:16.327 [2024-07-11 16:22:53.046315] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:07:16.327 passed 00:07:16.327 Test: test_identify_ctrlr ...passed 00:07:16.327 Test: test_identify_ctrlr_iocs_specific ...[2024-07-11 16:22:53.046360] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:07:16.327 passed 00:07:16.327 Test: test_custom_admin_cmd ...passed 00:07:16.327 Test: test_fused_compare_and_write ...[2024-07-11 16:22:53.046794] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:07:16.327 [2024-07-11 16:22:53.046836] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:16.327 [2024-07-11 16:22:53.046872] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:16.327 passed 00:07:16.327 Test: test_multi_async_event_reqs ...passed 00:07:16.327 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:07:16.327 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:07:16.327 Test: test_multi_async_events ...passed 00:07:16.327 Test: test_rae ...passed 00:07:16.327 Test: test_nvmf_ctrlr_create_destruct ...passed 00:07:16.327 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:07:16.327 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-11 16:22:53.047326] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:07:16.327 passed 00:07:16.327 Test: test_zcopy_read ...passed 00:07:16.327 Test: test_zcopy_write ...passed 00:07:16.327 Test: test_nvmf_property_set ...passed 00:07:16.327 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-11 16:22:53.047487] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:16.327 passed 00:07:16.327 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-11 16:22:53.047559] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:16.327 [2024-07-11 16:22:53.047602] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:07:16.327 passed 00:07:16.327 00:07:16.327 [2024-07-11 16:22:53.047631] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:07:16.327 [2024-07-11 16:22:53.047654] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:07:16.327 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.327 suites 1 1 n/a 0 0 00:07:16.327 tests 30 30 30 0 0 00:07:16.327 asserts 885 885 885 0 n/a 00:07:16.327 00:07:16.327 Elapsed time = 0.005 seconds 00:07:16.327 16:22:53 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:07:16.327 00:07:16.327 00:07:16.327 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.327 http://cunit.sourceforge.net/ 00:07:16.327 00:07:16.327 00:07:16.327 Suite: nvmf 00:07:16.327 Test: test_get_rw_params ...passed 00:07:16.327 Test: test_lba_in_range ...passed 00:07:16.327 Test: test_get_dif_ctx ...passed 00:07:16.327 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:07:16.327 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-11 16:22:53.079171] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:07:16.327 [2024-07-11 16:22:53.079536] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:07:16.327 [2024-07-11 16:22:53.079662] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:07:16.327 passed 00:07:16.327 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-11 16:22:53.079730] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:07:16.327 [2024-07-11 16:22:53.079819] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:07:16.327 passed 00:07:16.327 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-11 16:22:53.079931] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:07:16.327 passed 00:07:16.327 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:07:16.327 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:07:16.327 00:07:16.327 [2024-07-11 16:22:53.079967] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:07:16.327 [2024-07-11 16:22:53.080030] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:07:16.327 [2024-07-11 16:22:53.080081] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:07:16.327 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.327 suites 1 1 n/a 0 0 00:07:16.327 tests 9 9 9 0 0 00:07:16.327 asserts 157 157 157 0 n/a 00:07:16.327 00:07:16.327 Elapsed time = 0.001 seconds 00:07:16.327 16:22:53 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:07:16.327 00:07:16.327 00:07:16.327 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.327 http://cunit.sourceforge.net/ 00:07:16.327 00:07:16.327 00:07:16.327 Suite: nvmf 00:07:16.327 Test: test_discovery_log ...passed 00:07:16.327 Test: test_discovery_log_with_filters ...passed 00:07:16.327 00:07:16.327 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.327 suites 1 1 n/a 0 0 00:07:16.327 tests 2 2 2 0 0 00:07:16.327 asserts 238 238 238 0 n/a 00:07:16.327 00:07:16.327 Elapsed time = 0.003 seconds 00:07:16.586 16:22:53 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:07:16.586 00:07:16.586 00:07:16.586 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.586 http://cunit.sourceforge.net/ 00:07:16.586 00:07:16.586 00:07:16.586 Suite: nvmf 
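The subsystem suite below spends most of its assertions on NQN validation. As a hedged sketch, this is the basic shape of those checks; the length bounds and the user-name prefix rule come straight from the errors that follow, while the real nvmf_nqn_is_valid() in lib/nvmf/subsystem.c additionally validates domain labels, UUID-form NQNs, and UTF-8.

    #include <stdbool.h>
    #include <string.h>

    /* Illustrative-only NQN sanity check; not the full SPDK rule set. */
    static bool nqn_plausible(const char *nqn)
    {
        size_t len = strlen(nqn);
        if (len < 11 || len > 223)
            return false;              /* "length 4 < min 11", "length 224 > max 223" */
        if (strncmp(nqn, "nqn.", 4) != 0)
            return false;
        if (strchr(nqn, ':') == NULL)
            return false;              /* user-specified name needs a ':' prefix */
        return true;
    }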
00:07:16.586 Test: nvmf_test_create_subsystem ...[2024-07-11 16:22:53.157379] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:07:16.586 [2024-07-11 16:22:53.157755] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:07:16.586 [2024-07-11 16:22:53.157845] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:07:16.586 [2024-07-11 16:22:53.157879] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:07:16.586 [2024-07-11 16:22:53.157903] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:07:16.586 [2024-07-11 16:22:53.157939] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:07:16.586 [2024-07-11 16:22:53.158042] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:07:16.586 [2024-07-11 16:22:53.158208] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:07:16.586 [2024-07-11 16:22:53.158308] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:07:16.586 [2024-07-11 16:22:53.158345] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:16.586 [2024-07-11 16:22:53.158369] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:16.586 passed 00:07:16.586 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-11 16:22:53.158513] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:07:16.586 [2024-07-11 16:22:53.158607] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:07:16.586 passed 00:07:16.586 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:07:16.586 Test: test_reservation_register ...[2024-07-11 16:22:53.158848] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.586 [2024-07-11 16:22:53.158949] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:07:16.586 passed 00:07:16.586 Test: test_reservation_register_with_ptpl ...passed 00:07:16.586 Test: test_reservation_acquire_preempt_1 ...[2024-07-11 16:22:53.159832] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.586 passed 00:07:16.586 Test: test_reservation_acquire_release_with_ptpl ...passed 00:07:16.586 Test: test_reservation_release ...[2024-07-11 16:22:53.161545] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.586 passed 00:07:16.587 Test: test_reservation_unregister_notification ...[2024-07-11 16:22:53.161791] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.587 passed 00:07:16.587 Test: test_reservation_release_notification ...[2024-07-11 16:22:53.162063] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.587 passed 00:07:16.587 Test: test_reservation_release_notification_write_exclusive ...[2024-07-11 16:22:53.162281] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.587 passed 00:07:16.587 Test: test_reservation_clear_notification ...[2024-07-11 16:22:53.162499] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.587 passed 00:07:16.587 Test: test_reservation_preempt_notification ...[2024-07-11 16:22:53.162715] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:16.587 passed 00:07:16.587 Test: test_spdk_nvmf_ns_event ...passed 00:07:16.587 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:07:16.587 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:07:16.587 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-11 16:22:53.163430] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:07:16.587 [2024-07-11 16:22:53.163527] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:07:16.587 passed 00:07:16.587 Test: test_nvmf_ns_reservation_report ...passed 00:07:16.587 Test: test_nvmf_nqn_is_valid ...[2024-07-11 16:22:53.163653] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:07:16.587 [2024-07-11 16:22:53.163732] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:07:16.587 [2024-07-11 16:22:53.163772] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:f38b323f-30b4-4450-94da-06bb591704a": uuid is not the correct length 00:07:16.587 passed 00:07:16.587 Test: test_nvmf_ns_reservation_restore ...[2024-07-11 16:22:53.163800] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:07:16.587 [2024-07-11 16:22:53.163911] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:07:16.587 passed 00:07:16.587 Test: test_nvmf_subsystem_state_change ...passed 00:07:16.587 Test: test_nvmf_reservation_custom_ops ...passed 00:07:16.587 00:07:16.587 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.587 suites 1 1 n/a 0 0 00:07:16.587 tests 22 22 22 0 0 00:07:16.587 asserts 407 407 407 0 n/a 00:07:16.587 00:07:16.587 Elapsed time = 0.008 seconds 00:07:16.587 16:22:53 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:07:16.587 00:07:16.587 00:07:16.587 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.587 http://cunit.sourceforge.net/ 00:07:16.587 00:07:16.587 00:07:16.587 Suite: nvmf 00:07:16.587 Test: test_nvmf_tcp_create ...[2024-07-11 16:22:53.220048] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:07:16.587 passed 00:07:16.587 Test: test_nvmf_tcp_destroy ...passed 00:07:16.587 Test: test_nvmf_tcp_poll_group_create ...passed 00:07:16.587 Test: test_nvmf_tcp_send_c2h_data ...passed 00:07:16.587 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:07:16.587 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:07:16.587 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:07:16.587 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-11 16:22:53.317384] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.317463] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760830 is same with the state(5) to be set 00:07:16.587 [2024-07-11 16:22:53.317591] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x7ffdb9760830 is same with the state(5) to be set 00:07:16.587 [2024-07-11 16:22:53.317674] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.317698] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760830 is same with the state(5) to be set 00:07:16.587 passed 00:07:16.587 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:07:16.587 Test: test_nvmf_tcp_icreq_handle ...[2024-07-11 16:22:53.317783] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:16.587 [2024-07-11 16:22:53.317864] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.317929] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760830 is same with the state(5) to be set 00:07:16.587 [2024-07-11 16:22:53.317957] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:16.587 [2024-07-11 16:22:53.317987] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760830 is same with the state(5) to be set 00:07:16.587 [2024-07-11 16:22:53.318009] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.318039] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760830 is same with the state(5) to be set 00:07:16.587 [2024-07-11 16:22:53.318068] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.318114] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760830 is same with the state(5) to be set 00:07:16.587 passed 00:07:16.587 Test: test_nvmf_tcp_check_xfer_type ...passed 00:07:16.587 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-11 16:22:53.318173] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:07:16.587 [2024-07-11 16:22:53.318209] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.318231] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760830 is same with the state(5) to be set 00:07:16.587 passed 00:07:16.587 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-11 16:22:53.318267] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffdb9761590 00:07:16.587 [2024-07-11 16:22:53.318342] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.318399] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760cf0 is same with the state(5) to be set 00:07:16.587 [2024-07-11 16:22:53.318440] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffdb9760cf0 00:07:16.587 [2024-07-11 16:22:53.318464] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.318493] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760cf0 is same with the state(5) to be set 00:07:16.587 [2024-07-11 16:22:53.318522] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:07:16.587 [2024-07-11 16:22:53.318562] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.318600] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760cf0 is same with the state(5) to be set 00:07:16.587 [2024-07-11 16:22:53.318632] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:07:16.587 [2024-07-11 16:22:53.318657] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.318684] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760cf0 is same with the state(5) to be set 00:07:16.587 [2024-07-11 16:22:53.318711] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.318739] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760cf0 is same with the state(5) to be set 00:07:16.587 [2024-07-11 16:22:53.318786] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.318810] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760cf0 is same with the state(5) to be set 00:07:16.587 [2024-07-11 16:22:53.318860] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.318881] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760cf0 is same with the state(5) to be set 00:07:16.587 [2024-07-11 16:22:53.318914] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.318935] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760cf0 is same with the state(5) to be set 00:07:16.587 [2024-07-11 16:22:53.318978] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.319000] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760cf0 is same with the state(5) to be set 00:07:16.587 passed 00:07:16.587 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-11 
16:22:53.319034] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:16.587 [2024-07-11 16:22:53.319059] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffdb9760cf0 is same with the state(5) to be set 00:07:16.587 passed 00:07:16.587 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-11 16:22:53.337164] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:07:16.587 [2024-07-11 16:22:53.337230] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:07:16.587 passed 00:07:16.587 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-11 16:22:53.337446] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:07:16.587 [2024-07-11 16:22:53.337480] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:07:16.587 passed 00:07:16.587 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed[2024-07-11 16:22:53.337614] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:07:16.587 [2024-07-11 16:22:53.337641] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:07:16.587 00:07:16.587 00:07:16.587 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.587 suites 1 1 n/a 0 0 00:07:16.587 tests 17 17 17 0 0 00:07:16.587 asserts 222 222 222 0 n/a 00:07:16.587 00:07:16.587 Elapsed time = 0.140 seconds 00:07:16.846 16:22:53 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:07:16.846 00:07:16.846 00:07:16.846 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.846 http://cunit.sourceforge.net/ 00:07:16.846 00:07:16.846 00:07:16.846 Suite: nvmf 00:07:16.846 Test: test_nvmf_tgt_create_poll_group ...passed 00:07:16.846 00:07:16.846 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.846 suites 1 1 n/a 0 0 00:07:16.846 tests 1 1 1 0 0 00:07:16.846 asserts 17 17 17 0 n/a 00:07:16.846 00:07:16.846 Elapsed time = 0.022 seconds 00:07:16.846 00:07:16.846 real 0m0.450s 00:07:16.846 user 0m0.218s 00:07:16.846 sys 0m0.235s 00:07:16.846 16:22:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.846 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:16.846 ************************************ 00:07:16.846 END TEST unittest_nvmf 00:07:16.846 ************************************ 00:07:16.846 16:22:53 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:16.846 16:22:53 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:16.846 16:22:53 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:16.846 16:22:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.846 16:22:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.846 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:16.846 ************************************ 00:07:16.846 START TEST 
unittest_nvmf_rdma 00:07:16.846 ************************************ 00:07:16.846 16:22:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:16.846 00:07:16.846 00:07:16.846 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.846 http://cunit.sourceforge.net/ 00:07:16.846 00:07:16.846 00:07:16.846 Suite: nvmf 00:07:16.846 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-11 16:22:53.550309] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:07:16.846 [2024-07-11 16:22:53.550640] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:07:16.846 [2024-07-11 16:22:53.550691] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1964:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:07:16.846 passed 00:07:16.846 Test: test_spdk_nvmf_rdma_request_process ...passed 00:07:16.846 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:07:16.846 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:07:16.846 Test: test_nvmf_rdma_opts_init ...passed 00:07:16.846 Test: test_nvmf_rdma_request_free_data ...passed 00:07:16.846 Test: test_nvmf_rdma_update_ibv_state ...passed 00:07:16.846 Test: test_nvmf_rdma_resources_create ...[2024-07-11 16:22:53.551880] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:07:16.846 [2024-07-11 16:22:53.551927] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:07:16.846 passed 00:07:16.846 Test: test_nvmf_rdma_qpair_compare ...passed 00:07:16.846 Test: test_nvmf_rdma_resize_cq ...[2024-07-11 16:22:53.553197] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. 
Current capacity 20, required 0 00:07:16.846 Using CQ of insufficient size may lead to CQ overrun 00:07:16.846 passed 00:07:16.846 00:07:16.846 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.846 suites 1 1 n/a 0 0 00:07:16.846 tests 10 10 10 0 0 00:07:16.846 asserts 584 584 584 0 n/a 00:07:16.846 00:07:16.846 Elapsed time = 0.003 seconds 00:07:16.846 [2024-07-11 16:22:53.553306] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:07:16.846 [2024-07-11 16:22:53.553367] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:16.846 00:07:16.846 real 0m0.045s 00:07:16.846 user 0m0.033s 00:07:16.846 sys 0m0.012s 00:07:16.846 16:22:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.846 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:16.846 ************************************ 00:07:16.846 END TEST unittest_nvmf_rdma 00:07:16.846 ************************************ 00:07:16.846 16:22:53 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:16.846 16:22:53 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:07:16.846 16:22:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:16.846 16:22:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.846 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:16.846 ************************************ 00:07:16.846 START TEST unittest_scsi 00:07:16.846 ************************************ 00:07:16.846 16:22:53 -- common/autotest_common.sh@1104 -- # unittest_scsi 00:07:16.846 16:22:53 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:07:16.846 00:07:16.846 00:07:16.846 CUnit - A unit testing framework for C - Version 2.1-3 00:07:16.846 http://cunit.sourceforge.net/ 00:07:16.846 00:07:16.846 00:07:16.846 Suite: dev_suite 00:07:16.846 Test: dev_destruct_null_dev ...passed 00:07:16.846 Test: dev_destruct_zero_luns ...passed 00:07:16.846 Test: dev_destruct_null_lun ...passed 00:07:16.846 Test: dev_destruct_success ...passed 00:07:16.846 Test: dev_construct_num_luns_zero ...[2024-07-11 16:22:53.644528] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:07:16.846 passed 00:07:16.846 Test: dev_construct_no_lun_zero ...[2024-07-11 16:22:53.645151] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:07:16.846 passed 00:07:16.846 Test: dev_construct_null_lun ...[2024-07-11 16:22:53.645514] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:07:16.846 passed 00:07:16.846 Test: dev_construct_name_too_long ...[2024-07-11 16:22:53.645700] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:07:16.846 passed 00:07:16.846 Test: dev_construct_success ...passed 00:07:16.846 Test: dev_construct_success_lun_zero_not_first ...passed 00:07:16.846 Test: 
dev_queue_mgmt_task_success ...passed 00:07:16.846 Test: dev_queue_task_success ...passed 00:07:16.846 Test: dev_stop_success ...passed 00:07:16.846 Test: dev_add_port_max_ports ...[2024-07-11 16:22:53.646994] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:07:16.846 passed 00:07:16.846 Test: dev_add_port_construct_failure1 ...[2024-07-11 16:22:53.647371] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:07:16.846 passed 00:07:16.846 Test: dev_add_port_construct_failure2 ...[2024-07-11 16:22:53.647705] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:07:16.846 passed 00:07:16.846 Test: dev_add_port_success1 ...passed 00:07:16.846 Test: dev_add_port_success2 ...passed 00:07:16.846 Test: dev_add_port_success3 ...passed 00:07:16.846 Test: dev_find_port_by_id_num_ports_zero ...passed 00:07:16.846 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:07:16.846 Test: dev_find_port_by_id_success ...passed 00:07:16.846 Test: dev_add_lun_bdev_not_found ...passed 00:07:16.846 Test: dev_add_lun_no_free_lun_id ...[2024-07-11 16:22:53.649272] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:07:16.846 passed 00:07:16.846 Test: dev_add_lun_success1 ...passed 00:07:16.846 Test: dev_add_lun_success2 ...passed 00:07:16.846 Test: dev_check_pending_tasks ...passed 00:07:16.846 Test: dev_iterate_luns ...passed 00:07:16.846 Test: dev_find_free_lun ...passed 00:07:16.846 00:07:16.846 Run Summary: Type Total Ran Passed Failed Inactive 00:07:16.846 suites 1 1 n/a 0 0 00:07:16.846 tests 29 29 29 0 0 00:07:16.846 asserts 97 97 97 0 n/a 00:07:16.846 00:07:16.846 Elapsed time = 0.003 seconds 00:07:17.105 16:22:53 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:07:17.105 00:07:17.105 00:07:17.105 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.105 http://cunit.sourceforge.net/ 00:07:17.105 00:07:17.105 00:07:17.105 Suite: lun_suite 00:07:17.105 Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-11 16:22:53.685929] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:07:17.105 passed 00:07:17.105 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:07:17.105 Test: lun_task_mgmt_execute_lun_reset ...[2024-07-11 16:22:53.686358] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:07:17.105 passed 00:07:17.105 Test: lun_task_mgmt_execute_target_reset ...passed 00:07:17.105 Test: lun_task_mgmt_execute_invalid_case ...passed 00:07:17.105 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...[2024-07-11 16:22:53.686524] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:07:17.105 passed 00:07:17.105 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:07:17.105 Test: lun_append_task_null_lun_not_supported ...passed 00:07:17.105 Test: lun_execute_scsi_task_pending ...passed 00:07:17.105 Test: lun_execute_scsi_task_complete ...passed 00:07:17.105 Test: lun_execute_scsi_task_resize ...passed 00:07:17.105 Test: lun_destruct_success ...passed 00:07:17.106 Test: lun_construct_null_ctx ...passed 00:07:17.106 Test: lun_construct_success ...passed 00:07:17.106 Test: 
lun_reset_task_wait_scsi_task_complete ...[2024-07-11 16:22:53.686752] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:07:17.106 passed 00:07:17.106 Test: lun_reset_task_suspend_scsi_task ...passed 00:07:17.106 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:07:17.106 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:07:17.106 00:07:17.106 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.106 suites 1 1 n/a 0 0 00:07:17.106 tests 18 18 18 0 0 00:07:17.106 asserts 153 153 153 0 n/a 00:07:17.106 00:07:17.106 Elapsed time = 0.001 seconds 00:07:17.106 16:22:53 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:07:17.106 00:07:17.106 00:07:17.106 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.106 http://cunit.sourceforge.net/ 00:07:17.106 00:07:17.106 00:07:17.106 Suite: scsi_suite 00:07:17.106 Test: scsi_init ...passed 00:07:17.106 00:07:17.106 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.106 suites 1 1 n/a 0 0 00:07:17.106 tests 1 1 1 0 0 00:07:17.106 asserts 1 1 1 0 n/a 00:07:17.106 00:07:17.106 Elapsed time = 0.000 seconds 00:07:17.106 16:22:53 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:07:17.106 00:07:17.106 00:07:17.106 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.106 http://cunit.sourceforge.net/ 00:07:17.106 00:07:17.106 00:07:17.106 Suite: translation_suite 00:07:17.106 Test: mode_select_6_test ...passed 00:07:17.106 Test: mode_select_6_test2 ...passed 00:07:17.106 Test: mode_sense_6_test ...passed 00:07:17.106 Test: mode_sense_10_test ...passed 00:07:17.106 Test: inquiry_evpd_test ...passed 00:07:17.106 Test: inquiry_standard_test ...passed 00:07:17.106 Test: inquiry_overflow_test ...passed 00:07:17.106 Test: task_complete_test ...passed 00:07:17.106 Test: lba_range_test ...passed 00:07:17.106 Test: xfer_len_test ...[2024-07-11 16:22:53.742631] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:07:17.106 passed 00:07:17.106 Test: xfer_test ...passed 00:07:17.106 Test: scsi_name_padding_test ...passed 00:07:17.106 Test: get_dif_ctx_test ...passed 00:07:17.106 Test: unmap_split_test ...passed 00:07:17.106 00:07:17.106 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.106 suites 1 1 n/a 0 0 00:07:17.106 tests 14 14 14 0 0 00:07:17.106 asserts 1200 1200 1200 0 n/a 00:07:17.106 00:07:17.106 Elapsed time = 0.004 seconds 00:07:17.106 16:22:53 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:07:17.106 00:07:17.106 00:07:17.106 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.106 http://cunit.sourceforge.net/ 00:07:17.106 00:07:17.106 00:07:17.106 Suite: reservation_suite 00:07:17.106 Test: test_reservation_register ...[2024-07-11 16:22:53.768282] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:17.106 passed 00:07:17.106 Test: test_reservation_reserve ...[2024-07-11 16:22:53.768816] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:17.106 [2024-07-11 16:22:53.768906] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 
00:07:17.106 passed 00:07:17.106 Test: test_reservation_preempt_non_all_regs ...[2024-07-11 16:22:53.769035] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:07:17.106 [2024-07-11 16:22:53.769117] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:17.106 [2024-07-11 16:22:53.769211] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:07:17.106 passed 00:07:17.106 Test: test_reservation_preempt_all_regs ...[2024-07-11 16:22:53.769372] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:17.106 passed 00:07:17.106 Test: test_reservation_cmds_conflict ...[2024-07-11 16:22:53.769516] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:17.106 [2024-07-11 16:22:53.769602] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:07:17.106 [2024-07-11 16:22:53.769658] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:17.106 [2024-07-11 16:22:53.769686] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:17.106 [2024-07-11 16:22:53.769726] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:17.106 [2024-07-11 16:22:53.769753] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:17.106 passed 00:07:17.106 Test: test_scsi2_reserve_release ...passed 00:07:17.106 Test: test_pr_with_scsi2_reserve_release ...[2024-07-11 16:22:53.769865] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:17.106 passed 00:07:17.106 00:07:17.106 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.106 suites 1 1 n/a 0 0 00:07:17.106 tests 7 7 7 0 0 00:07:17.106 asserts 257 257 257 0 n/a 00:07:17.106 00:07:17.106 Elapsed time = 0.002 seconds 00:07:17.106 00:07:17.106 real 0m0.158s 00:07:17.106 user 0m0.084s 00:07:17.106 sys 0m0.073s 00:07:17.106 16:22:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.106 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:17.106 ************************************ 00:07:17.106 END TEST unittest_scsi 00:07:17.106 ************************************ 00:07:17.106 16:22:53 -- unit/unittest.sh@276 -- # uname -s 00:07:17.106 16:22:53 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:07:17.106 16:22:53 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:07:17.106 16:22:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.106 16:22:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.106 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:17.106 ************************************ 00:07:17.106 START TEST unittest_sock 00:07:17.106 ************************************ 00:07:17.106 16:22:53 -- common/autotest_common.sh@1104 -- # unittest_sock 00:07:17.106 16:22:53 -- unit/unittest.sh@123 -- 
# /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:07:17.106 00:07:17.106 00:07:17.106 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.106 http://cunit.sourceforge.net/ 00:07:17.106 00:07:17.106 00:07:17.106 Suite: sock 00:07:17.106 Test: posix_sock ...passed 00:07:17.106 Test: ut_sock ...passed 00:07:17.106 Test: posix_sock_group ...passed 00:07:17.106 Test: ut_sock_group ...passed 00:07:17.106 Test: posix_sock_group_fairness ...passed 00:07:17.106 Test: _posix_sock_close ...passed 00:07:17.106 Test: sock_get_default_opts ...passed 00:07:17.106 Test: ut_sock_impl_get_set_opts ...passed 00:07:17.106 Test: posix_sock_impl_get_set_opts ...passed 00:07:17.106 Test: ut_sock_map ...passed 00:07:17.106 Test: override_impl_opts ...passed 00:07:17.106 Test: ut_sock_group_get_ctx ...passed 00:07:17.106 00:07:17.106 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.106 suites 1 1 n/a 0 0 00:07:17.106 tests 12 12 12 0 0 00:07:17.106 asserts 349 349 349 0 n/a 00:07:17.106 00:07:17.106 Elapsed time = 0.006 seconds 00:07:17.106 16:22:53 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:07:17.366 00:07:17.366 00:07:17.366 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.366 http://cunit.sourceforge.net/ 00:07:17.366 00:07:17.366 00:07:17.366 Suite: posix 00:07:17.366 Test: flush ...passed 00:07:17.366 00:07:17.366 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.366 suites 1 1 n/a 0 0 00:07:17.366 tests 1 1 1 0 0 00:07:17.366 asserts 28 28 28 0 n/a 00:07:17.366 00:07:17.366 Elapsed time = 0.000 seconds 00:07:17.366 16:22:53 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:17.366 00:07:17.366 real 0m0.095s 00:07:17.366 user 0m0.044s 00:07:17.366 sys 0m0.027s 00:07:17.366 16:22:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.366 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:17.366 ************************************ 00:07:17.366 END TEST unittest_sock 00:07:17.366 ************************************ 00:07:17.366 16:22:53 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:17.366 16:22:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.366 16:22:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.366 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:07:17.366 ************************************ 00:07:17.366 START TEST unittest_thread 00:07:17.366 ************************************ 00:07:17.366 16:22:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:17.366 00:07:17.366 00:07:17.366 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.366 http://cunit.sourceforge.net/ 00:07:17.366 00:07:17.366 00:07:17.366 Suite: io_channel 00:07:17.366 Test: thread_alloc ...passed 00:07:17.366 Test: thread_send_msg ...passed 00:07:17.366 Test: thread_poller ...passed 00:07:17.366 Test: poller_pause ...passed 00:07:17.366 Test: thread_for_each ...passed 00:07:17.366 Test: for_each_channel_remove ...passed 00:07:17.366 Test: for_each_channel_unreg ...[2024-07-11 16:22:54.020358] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffceb00e590 already registered (old:0x613000000200 new:0x6130000003c0) 00:07:17.366 passed 00:07:17.366 Test: thread_name ...passed 
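The for_each_channel_unreg error above ("already registered") and the channel error below ("could not find io_device") are the two failure modes of SPDK's io_device registry. A minimal sketch of the intended usage, assuming it runs in an SPDK thread context, with the per-channel callbacks trimmed to no-ops:

    #include "spdk/thread.h"

    static int  create_cb(void *io_device, void *ctx_buf)  { return 0; }
    static void destroy_cb(void *io_device, void *ctx_buf) { }

    static int device_tag;  /* any unique address serves as the io_device key */

    static void io_device_example(void)
    {
        /* Register once; registering the same address again trips
         * "io_device ... already registered". */
        spdk_io_device_register(&device_tag, create_cb, destroy_cb, 0, "example_dev");

        /* Lookups only succeed between register and unregister; otherwise
         * spdk_get_io_channel() logs "could not find io_device". */
        struct spdk_io_channel *ch = spdk_get_io_channel(&device_tag);
        if (ch != NULL) {
            spdk_put_io_channel(ch);
        }

        spdk_io_device_unregister(&device_tag, NULL);
    }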
00:07:17.366 Test: channel ...[2024-07-11 16:22:54.024922] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x5652cc3cf0e0 00:07:17.366 passed 00:07:17.366 Test: channel_destroy_races ...passed 00:07:17.366 Test: thread_exit_test ...[2024-07-11 16:22:54.030470] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:07:17.366 passed 00:07:17.366 Test: thread_update_stats_test ...passed 00:07:17.366 Test: nested_channel ...passed 00:07:17.366 Test: device_unregister_and_thread_exit_race ...passed 00:07:17.366 Test: cache_closest_timed_poller ...passed 00:07:17.366 Test: multi_timed_pollers_have_same_expiration ...passed 00:07:17.366 Test: io_device_lookup ...passed 00:07:17.366 Test: spdk_spin ...[2024-07-11 16:22:54.042704] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:17.366 [2024-07-11 16:22:54.042849] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffceb00e580 00:07:17.366 [2024-07-11 16:22:54.043032] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:17.366 [2024-07-11 16:22:54.044820] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:17.366 [2024-07-11 16:22:54.045035] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffceb00e580 00:07:17.366 [2024-07-11 16:22:54.045162] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:17.366 [2024-07-11 16:22:54.045293] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffceb00e580 00:07:17.366 [2024-07-11 16:22:54.045421] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:17.366 [2024-07-11 16:22:54.045561] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffceb00e580 00:07:17.366 [2024-07-11 16:22:54.045698] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:07:17.366 [2024-07-11 16:22:54.045850] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffceb00e580 00:07:17.366 passed 00:07:17.366 Test: for_each_channel_and_thread_exit_race ...passed 00:07:17.366 Test: for_each_thread_and_thread_exit_race ...passed 00:07:17.366 00:07:17.366 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.366 suites 1 1 n/a 0 0 00:07:17.366 tests 20 20 20 0 0 00:07:17.366 asserts 409 409 409 0 n/a 00:07:17.366 00:07:17.366 Elapsed time = 0.050 seconds 00:07:17.366 00:07:17.366 real 0m0.094s 00:07:17.366 user 0m0.069s 00:07:17.366 sys 0m0.021s 00:07:17.366 16:22:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.366 16:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.366 ************************************ 00:07:17.366 END TEST unittest_thread 00:07:17.366 
************************************ 00:07:17.366 16:22:54 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:17.366 16:22:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.366 16:22:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.366 16:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.366 ************************************ 00:07:17.366 START TEST unittest_iobuf 00:07:17.366 ************************************ 00:07:17.366 16:22:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:17.366 00:07:17.366 00:07:17.366 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.366 http://cunit.sourceforge.net/ 00:07:17.366 00:07:17.366 00:07:17.366 Suite: io_channel 00:07:17.366 Test: iobuf ...passed 00:07:17.366 Test: iobuf_cache ...[2024-07-11 16:22:54.150840] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:17.366 [2024-07-11 16:22:54.151243] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:17.366 [2024-07-11 16:22:54.151473] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:07:17.366 [2024-07-11 16:22:54.151625] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:17.366 [2024-07-11 16:22:54.151793] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:17.366 [2024-07-11 16:22:54.151926] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
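The iobuf_cache case above works the same way: with spdk_iobuf_opts.small_pool_count and large_pool_count forced down to 4, populating a per-channel buffer cache must fail, and the test asserts that spdk_iobuf_channel_init reports it. In a real application the remedy is the one the message suggests: size the shared pools before initialization (scripts/calc-iobuf.py helps pick values). A hedged sketch, assuming the spdk_iobuf_get_opts()/spdk_iobuf_set_opts() pair declared in include/spdk/thread.h; exact signatures have shifted between SPDK releases:

    #include "spdk/thread.h"

    /* Sketch only: grow the shared iobuf pools so every I/O channel can
     * populate its per-thread cache (pool count must cover the cache size
     * times the number of channels).  Assumes spdk_iobuf_get_opts()/
     * spdk_iobuf_set_opts() as declared in spdk/thread.h for this tree;
     * signatures vary across SPDK releases. */
    static int grow_iobuf_pools(void)
    {
        struct spdk_iobuf_opts opts;

        spdk_iobuf_get_opts(&opts);
        opts.small_pool_count = 8192;   /* example values, not tuned */
        opts.large_pool_count = 1024;
        return spdk_iobuf_set_opts(&opts);
    }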
00:07:17.366 passed 00:07:17.366 00:07:17.366 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.366 suites 1 1 n/a 0 0 00:07:17.366 tests 2 2 2 0 0 00:07:17.366 asserts 107 107 107 0 n/a 00:07:17.366 00:07:17.366 Elapsed time = 0.006 seconds 00:07:17.366 00:07:17.366 real 0m0.036s 00:07:17.366 user 0m0.015s 00:07:17.366 sys 0m0.019s 00:07:17.366 16:22:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.367 16:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.367 ************************************ 00:07:17.367 END TEST unittest_iobuf 00:07:17.367 ************************************ 00:07:17.626 16:22:54 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:07:17.626 16:22:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:17.626 16:22:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.626 16:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:17.626 ************************************ 00:07:17.626 START TEST unittest_util 00:07:17.626 ************************************ 00:07:17.626 00:07:17.626 00:07:17.626 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.626 http://cunit.sourceforge.net/ 00:07:17.626 00:07:17.626 00:07:17.626 Suite: base64 00:07:17.626 Test: test_base64_get_encoded_strlen ...passed 00:07:17.626 Test: test_base64_get_decoded_len ...passed 00:07:17.626 Test: test_base64_encode ...passed 00:07:17.626 Test: test_base64_decode ...passed 00:07:17.626 Test: test_base64_urlsafe_encode ...passed 00:07:17.626 Test: test_base64_urlsafe_decode ...passed 00:07:17.626 00:07:17.626 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.626 suites 1 1 n/a 0 0 00:07:17.626 tests 6 6 6 0 0 00:07:17.626 asserts 112 112 112 0 n/a 00:07:17.626 00:07:17.626 Elapsed time = 0.000 seconds 00:07:17.626 16:22:54 -- common/autotest_common.sh@1104 -- # unittest_util 00:07:17.626 16:22:54 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:07:17.626 16:22:54 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:07:17.626 00:07:17.626 00:07:17.626 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.626 http://cunit.sourceforge.net/ 00:07:17.626 00:07:17.626 00:07:17.626 Suite: bit_array 00:07:17.626 Test: test_1bit ...passed 00:07:17.626 Test: test_64bit ...passed 00:07:17.626 Test: test_find ...passed 00:07:17.626 Test: test_resize ...passed 00:07:17.626 Test: test_errors ...passed 00:07:17.626 Test: test_count ...passed 00:07:17.626 Test: test_mask_store_load ...passed 00:07:17.626 Test: test_mask_clear ...passed 00:07:17.626 00:07:17.626 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.626 suites 1 1 n/a 0 0 00:07:17.626 tests 8 8 8 0 0 00:07:17.626 asserts 5075 5075 5075 0 n/a 00:07:17.626 00:07:17.626 Elapsed time = 0.002 seconds 00:07:17.626 16:22:54 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:07:17.626 00:07:17.626 00:07:17.626 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.626 http://cunit.sourceforge.net/ 00:07:17.626 00:07:17.626 00:07:17.626 Suite: cpuset 00:07:17.626 Test: test_cpuset ...passed 00:07:17.626 Test: test_cpuset_parse ...[2024-07-11 16:22:54.277519] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:07:17.626 [2024-07-11 16:22:54.277914] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:07:17.626 [2024-07-11 16:22:54.278035] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:07:17.626 [2024-07-11 16:22:54.278138] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:07:17.626 [2024-07-11 16:22:54.278186] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:07:17.626 [2024-07-11 16:22:54.278230] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:07:17.626 [2024-07-11 16:22:54.278263] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:07:17.626 [2024-07-11 16:22:54.278320] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:07:17.626 passed 00:07:17.626 Test: test_cpuset_fmt ...passed 00:07:17.626 00:07:17.626 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.626 suites 1 1 n/a 0 0 00:07:17.626 tests 3 3 3 0 0 00:07:17.626 asserts 65 65 65 0 n/a 00:07:17.626 00:07:17.626 Elapsed time = 0.003 seconds 00:07:17.626 16:22:54 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:07:17.626 00:07:17.626 00:07:17.626 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.626 http://cunit.sourceforge.net/ 00:07:17.626 00:07:17.626 00:07:17.626 Suite: crc16 00:07:17.626 Test: test_crc16_t10dif ...passed 00:07:17.626 Test: test_crc16_t10dif_seed ...passed 00:07:17.626 Test: test_crc16_t10dif_copy ...passed 00:07:17.626 00:07:17.626 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.626 suites 1 1 n/a 0 0 00:07:17.626 tests 3 3 3 0 0 00:07:17.626 asserts 5 5 5 0 n/a 00:07:17.626 00:07:17.626 Elapsed time = 0.000 seconds 00:07:17.626 16:22:54 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:07:17.626 00:07:17.626 00:07:17.626 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.626 http://cunit.sourceforge.net/ 00:07:17.626 00:07:17.626 00:07:17.626 Suite: crc32_ieee 00:07:17.626 Test: test_crc32_ieee ...passed 00:07:17.626 00:07:17.626 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.626 suites 1 1 n/a 0 0 00:07:17.626 tests 1 1 1 0 0 00:07:17.626 asserts 1 1 1 0 n/a 00:07:17.626 00:07:17.626 Elapsed time = 0.000 seconds 00:07:17.626 16:22:54 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:07:17.626 00:07:17.626 00:07:17.626 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.626 http://cunit.sourceforge.net/ 00:07:17.626 00:07:17.626 00:07:17.626 Suite: crc32c 00:07:17.626 Test: test_crc32c ...passed 00:07:17.626 Test: test_crc32c_nvme ...passed 00:07:17.626 00:07:17.626 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.626 suites 1 1 n/a 0 0 00:07:17.626 tests 2 2 2 0 0 00:07:17.626 asserts 16 16 16 0 n/a 00:07:17.626 00:07:17.626 Elapsed time = 0.001 seconds 00:07:17.626 16:22:54 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:07:17.626 00:07:17.626 00:07:17.626 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.626 http://cunit.sourceforge.net/ 00:07:17.626 00:07:17.626 00:07:17.626 Suite: crc64 00:07:17.626 Test: test_crc64_nvme 
...passed 00:07:17.626 00:07:17.626 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.627 suites 1 1 n/a 0 0 00:07:17.627 tests 1 1 1 0 0 00:07:17.627 asserts 4 4 4 0 n/a 00:07:17.627 00:07:17.627 Elapsed time = 0.000 seconds 00:07:17.627 16:22:54 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:07:17.627 00:07:17.627 00:07:17.627 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.627 http://cunit.sourceforge.net/ 00:07:17.627 00:07:17.627 00:07:17.627 Suite: string 00:07:17.627 Test: test_parse_ip_addr ...passed 00:07:17.627 Test: test_str_chomp ...passed 00:07:17.627 Test: test_parse_capacity ...passed 00:07:17.627 Test: test_sprintf_append_realloc ...passed 00:07:17.627 Test: test_strtol ...passed 00:07:17.627 Test: test_strtoll ...passed 00:07:17.627 Test: test_strarray ...passed 00:07:17.627 Test: test_strcpy_replace ...passed 00:07:17.627 00:07:17.627 Run Summary: Type Total Ran Passed Failed Inactive 00:07:17.627 suites 1 1 n/a 0 0 00:07:17.627 tests 8 8 8 0 0 00:07:17.627 asserts 161 161 161 0 n/a 00:07:17.627 00:07:17.627 Elapsed time = 0.001 seconds 00:07:17.627 16:22:54 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:07:17.888 00:07:17.888 00:07:17.888 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.888 http://cunit.sourceforge.net/ 00:07:17.888 00:07:17.888 00:07:17.888 Suite: dif 00:07:17.888 Test: dif_generate_and_verify_test ...[2024-07-11 16:22:54.443145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:17.888 [2024-07-11 16:22:54.443859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:17.888 [2024-07-11 16:22:54.444361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:17.888 [2024-07-11 16:22:54.444773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:17.888 [2024-07-11 16:22:54.445240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:17.888 [2024-07-11 16:22:54.445663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:17.888 passed 00:07:17.888 Test: dif_disable_check_test ...[2024-07-11 16:22:54.447189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:17.888 [2024-07-11 16:22:54.447707] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:17.888 [2024-07-11 16:22:54.448129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:17.888 passed 00:07:17.888 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-11 16:22:54.449681] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:07:17.888 [2024-07-11 16:22:54.450122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:07:17.888 [2024-07-11 
16:22:54.450627] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:07:17.888 [2024-07-11 16:22:54.451138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:07:17.888 [2024-07-11 16:22:54.451668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:17.888 [2024-07-11 16:22:54.452122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:17.888 [2024-07-11 16:22:54.452606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:17.888 [2024-07-11 16:22:54.453139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:17.888 [2024-07-11 16:22:54.453614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:17.888 [2024-07-11 16:22:54.454102] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:17.888 [2024-07-11 16:22:54.454581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:17.888 passed 00:07:17.888 Test: dif_apptag_mask_test ...[2024-07-11 16:22:54.455055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:17.888 [2024-07-11 16:22:54.455453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:17.888 passed 00:07:17.888 Test: dif_sec_512_md_0_error_test ...[2024-07-11 16:22:54.455791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:17.888 passed 00:07:17.888 Test: dif_sec_4096_md_0_error_test ...[2024-07-11 16:22:54.455840] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:17.888 passed 00:07:17.888 Test: dif_sec_4100_md_128_error_test ...passed 00:07:17.888 Test: dif_guard_seed_test ...[2024-07-11 16:22:54.455882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
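Every Guard, App Tag, and Ref Tag comparison in this dif suite checks one field of the standard 8-byte T10 protection-information tuple that travels with each block, which is also what the "Metadata size is smaller than DIF size" rejection above enforces: the per-block metadata region must at least hold the tuple. The layout (standard T10 PI, not SPDK-specific):

    #include <stdint.h>

    /* The 8 bytes of T10 protection information per block; stored
     * big-endian on the wire/media. */
    struct t10_pi_tuple {
        uint16_t guard;    /* CRC of the block's data (CRC-16/T10-DIF here) */
        uint16_t app_tag;  /* application-defined opaque value */
        uint32_t ref_tag;  /* commonly the low 32 bits of the expected LBA */
    };

The inject_* cases later in the suite inject errors into data or metadata and assert that verification flags the affected Guard, App Tag, or Ref Tag, one field at a time.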
00:07:17.888 [2024-07-11 16:22:54.455946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:17.888 [2024-07-11 16:22:54.455986] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:17.888 passed 00:07:17.888 Test: dif_guard_value_test ...passed 00:07:17.888 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:07:17.888 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:07:17.888 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:17.888 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:17.888 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:17.888 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:07:17.888 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:17.888 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:17.888 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:07:17.888 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:17.888 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:07:17.888 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:07:17.888 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:17.888 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:17.888 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:17.888 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:17.888 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:17.888 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:17.888 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-11 16:22:54.508638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=bd4c, Actual=fd4c 00:07:17.888 [2024-07-11 16:22:54.511595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=be21, Actual=fe21 00:07:17.888 [2024-07-11 16:22:54.514663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.888 [2024-07-11 16:22:54.517745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.888 [2024-07-11 16:22:54.520745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=40000060 00:07:17.888 [2024-07-11 16:22:54.523630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=40000060 00:07:17.888 [2024-07-11 16:22:54.527211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fd4c, Actual=e457 00:07:17.888 [2024-07-11 16:22:54.530522] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fe21, Actual=d283 00:07:17.888 [2024-07-11 16:22:54.533966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=5ab753ed, Actual=1ab753ed 00:07:17.888 [2024-07-11 
16:22:54.537791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=78574660, Actual=38574660 00:07:17.888 [2024-07-11 16:22:54.540979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.888 [2024-07-11 16:22:54.542776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.544668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000000000000060 00:07:17.889 [2024-07-11 16:22:54.546502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000000000000060 00:07:17.889 [2024-07-11 16:22:54.548292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1ab753ed, Actual=1d005151 00:07:17.889 [2024-07-11 16:22:54.549960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=38574660, Actual=cb3078b6 00:07:17.889 [2024-07-11 16:22:54.551590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:17.889 [2024-07-11 16:22:54.553556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:07:17.889 [2024-07-11 16:22:54.555355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.557318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.559104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=40000060 00:07:17.889 [2024-07-11 16:22:54.561030] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=40000060 00:07:17.889 [2024-07-11 16:22:54.562830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728ecc20d3, Actual=fb1a045bd0617ffb 00:07:17.889 [2024-07-11 16:22:54.564504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=88010a2d4837a266, Actual=2dcc03a84ceeccf7 00:07:17.889 passed 00:07:17.889 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-11 16:22:54.565595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:17.889 [2024-07-11 16:22:54.565876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:07:17.889 [2024-07-11 16:22:54.566092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.566328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.566581] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.889 [2024-07-11 16:22:54.566803] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.889 [2024-07-11 16:22:54.567029] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e457 00:07:17.889 [2024-07-11 16:22:54.567157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d283 00:07:17.889 [2024-07-11 16:22:54.567284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:07:17.889 [2024-07-11 16:22:54.567502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:07:17.889 [2024-07-11 16:22:54.567737] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.567964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.568202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:17.889 [2024-07-11 16:22:54.568433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:17.889 [2024-07-11 16:22:54.568676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1d005151 00:07:17.889 [2024-07-11 16:22:54.568794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=cb3078b6 00:07:17.889 [2024-07-11 16:22:54.568960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:17.889 [2024-07-11 16:22:54.569188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:07:17.889 [2024-07-11 16:22:54.569428] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.569641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.569863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.889 [2024-07-11 16:22:54.570076] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.889 [2024-07-11 16:22:54.570307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=fb1a045bd0617ffb 00:07:17.889 [2024-07-11 16:22:54.570435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=88010a2d4837a266, Actual=2dcc03a84ceeccf7 00:07:17.889 passed 00:07:17.889 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-11 16:22:54.570596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:17.889 [2024-07-11 16:22:54.570819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:07:17.889 [2024-07-11 16:22:54.571034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.571254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.571489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.889 [2024-07-11 16:22:54.571712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.889 [2024-07-11 16:22:54.571928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e457 00:07:17.889 [2024-07-11 16:22:54.572052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d283 00:07:17.889 [2024-07-11 16:22:54.572205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:07:17.889 [2024-07-11 16:22:54.572450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:07:17.889 [2024-07-11 16:22:54.572686] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.572906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.573135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:17.889 [2024-07-11 16:22:54.573356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:17.889 [2024-07-11 16:22:54.573574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1d005151 00:07:17.889 [2024-07-11 16:22:54.573695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=cb3078b6 00:07:17.889 [2024-07-11 16:22:54.573834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:17.889 [2024-07-11 16:22:54.574049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:07:17.889 [2024-07-11 16:22:54.574270] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.889 
[2024-07-11 16:22:54.574493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.574722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.889 [2024-07-11 16:22:54.574938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.889 [2024-07-11 16:22:54.575177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=fb1a045bd0617ffb 00:07:17.889 [2024-07-11 16:22:54.575299] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=2dcc03a84ceeccf7 00:07:17.889 passed 00:07:17.889 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-11 16:22:54.575459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:17.889 [2024-07-11 16:22:54.575694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:07:17.889 [2024-07-11 16:22:54.575917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.889 [2024-07-11 16:22:54.576130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.576417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.890 [2024-07-11 16:22:54.576662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.890 [2024-07-11 16:22:54.576900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e457 00:07:17.890 [2024-07-11 16:22:54.577052] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d283 00:07:17.890 [2024-07-11 16:22:54.577192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:07:17.890 [2024-07-11 16:22:54.577429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:07:17.890 [2024-07-11 16:22:54.577670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.577897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.578116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:17.890 [2024-07-11 16:22:54.578340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:17.890 [2024-07-11 16:22:54.578563] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1d005151 00:07:17.890 [2024-07-11 16:22:54.578696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=cb3078b6 00:07:17.890 [2024-07-11 16:22:54.578827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:17.890 [2024-07-11 16:22:54.579049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:07:17.890 [2024-07-11 16:22:54.579264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.579487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.579710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.890 [2024-07-11 16:22:54.579932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.890 [2024-07-11 16:22:54.580181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=fb1a045bd0617ffb 00:07:17.890 [2024-07-11 16:22:54.580320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=2dcc03a84ceeccf7 00:07:17.890 passed 00:07:17.890 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-11 16:22:54.580473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:17.890 [2024-07-11 16:22:54.580711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:07:17.890 [2024-07-11 16:22:54.580944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.581169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.581406] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.890 [2024-07-11 16:22:54.581621] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.890 [2024-07-11 16:22:54.581841] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e457 00:07:17.890 [2024-07-11 16:22:54.581957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d283 00:07:17.890 passed 00:07:17.890 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-11 16:22:54.582120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=5ab753ed, Actual=1ab753ed 00:07:17.890 [2024-07-11 16:22:54.582347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:07:17.890 [2024-07-11 16:22:54.582581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.582795] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.583016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:17.890 [2024-07-11 16:22:54.583230] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:17.890 [2024-07-11 16:22:54.583452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1d005151 00:07:17.890 [2024-07-11 16:22:54.583574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=cb3078b6 00:07:17.890 [2024-07-11 16:22:54.583729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:17.890 [2024-07-11 16:22:54.583952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:07:17.890 [2024-07-11 16:22:54.584181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.584419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.584656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.890 [2024-07-11 16:22:54.584877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.890 [2024-07-11 16:22:54.585139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=fb1a045bd0617ffb 00:07:17.890 [2024-07-11 16:22:54.585271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=2dcc03a84ceeccf7 00:07:17.890 passed 00:07:17.890 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-11 16:22:54.585435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:17.890 [2024-07-11 16:22:54.585659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:07:17.890 [2024-07-11 16:22:54.585880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.586100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, 
Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.586337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.890 [2024-07-11 16:22:54.586552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.890 [2024-07-11 16:22:54.586772] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e457 00:07:17.890 [2024-07-11 16:22:54.586888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=d283 00:07:17.890 passed 00:07:17.890 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-11 16:22:54.587051] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:07:17.890 [2024-07-11 16:22:54.587270] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:07:17.890 [2024-07-11 16:22:54.587503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.587725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.587949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:17.890 [2024-07-11 16:22:54.588164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:17.890 [2024-07-11 16:22:54.588417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1d005151 00:07:17.890 [2024-07-11 16:22:54.588541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=cb3078b6 00:07:17.890 [2024-07-11 16:22:54.588722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:17.890 [2024-07-11 16:22:54.588954] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=c8010a2d4837a266, Actual=88010a2d4837a266 00:07:17.890 [2024-07-11 16:22:54.589180] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.890 [2024-07-11 16:22:54.589401] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.891 [2024-07-11 16:22:54.589622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.891 [2024-07-11 16:22:54.589836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.891 [2024-07-11 16:22:54.590070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=fb1a045bd0617ffb 
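The guard values being compared throughout these cases (Expected=fd4c vs Actual=e457 and the like) are, in the 16-bit PI format, CRC-16/T10-DIF checksums of the block payload, so any data corruption surfaces first as a Guard mismatch. A bitwise reference implementation for illustration, polynomial 0x8BB7, MSB-first, zero initial value; production code uses an equivalent table-driven or instruction-accelerated version:

    #include <stddef.h>
    #include <stdint.h>

    /* CRC-16/T10-DIF: poly 0x8BB7, no reflection, init 0, no final XOR.
     * crc16_t10dif(0, (const uint8_t *)"123456789", 9) == 0xd0db, the
     * standard check value for this CRC. */
    static uint16_t crc16_t10dif(uint16_t crc, const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
            }
        }
        return crc;
    }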
00:07:17.891 [2024-07-11 16:22:54.590198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=2dcc03a84ceeccf7 00:07:17.891 passed 00:07:17.891 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:07:17.891 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:17.891 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:17.891 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:17.891 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:17.891 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:17.891 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:17.891 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:17.891 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:17.891 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-11 16:22:54.620742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=bd4c, Actual=fd4c 00:07:17.891 [2024-07-11 16:22:54.621712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=2490, Actual=6490 00:07:17.891 [2024-07-11 16:22:54.622618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.891 [2024-07-11 16:22:54.623516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.891 [2024-07-11 16:22:54.624459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=40000060 00:07:17.891 [2024-07-11 16:22:54.625425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=40000060 00:07:17.891 [2024-07-11 16:22:54.626364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fd4c, Actual=e457 00:07:17.891 [2024-07-11 16:22:54.627245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=e4c1, Actual=c863 00:07:17.891 [2024-07-11 16:22:54.628167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=5ab753ed, Actual=1ab753ed 00:07:17.891 [2024-07-11 16:22:54.629137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a6eade94, Actual=e6eade94 00:07:17.891 [2024-07-11 16:22:54.630074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.891 [2024-07-11 16:22:54.630997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.891 [2024-07-11 16:22:54.631890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000000000000060 00:07:17.891 [2024-07-11 16:22:54.632833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000000000000060 00:07:17.891 [2024-07-11 16:22:54.633765] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1ab753ed, Actual=1d005151 00:07:17.891 [2024-07-11 16:22:54.634658] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=741688fe, Actual=8771b628 00:07:17.891 [2024-07-11 16:22:54.635542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:17.891 [2024-07-11 16:22:54.636492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=ce1f17a82b9e7aa4, Actual=8e1f17a82b9e7aa4 00:07:17.891 [2024-07-11 16:22:54.637456] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.891 [2024-07-11 16:22:54.638375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.891 [2024-07-11 16:22:54.639271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=40000060 00:07:17.891 [2024-07-11 16:22:54.640178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=40000060 00:07:17.891 [2024-07-11 16:22:54.641130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728ecc20d3, Actual=fb1a045bd0617ffb 00:07:17.891 passed 00:07:17.891 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-11 16:22:54.642084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=4c87c8f68d0ed55f, Actual=e94ac17389d7bbce 00:07:17.891 [2024-07-11 16:22:54.642389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:17.891 [2024-07-11 16:22:54.642592] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed47, Actual=ad47 00:07:17.891 [2024-07-11 16:22:54.642815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.891 [2024-07-11 16:22:54.643007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.891 [2024-07-11 16:22:54.643235] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.891 [2024-07-11 16:22:54.643456] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.891 [2024-07-11 16:22:54.643662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e457 00:07:17.891 [2024-07-11 16:22:54.643853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=1b4 00:07:17.891 [2024-07-11 16:22:54.644039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:07:17.891 [2024-07-11 16:22:54.644233] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=385a16c6, Actual=785a16c6 00:07:17.891 [2024-07-11 16:22:54.644481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.891 [2024-07-11 16:22:54.644692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.891 [2024-07-11 16:22:54.644899] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:17.891 [2024-07-11 16:22:54.645123] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:17.891 [2024-07-11 16:22:54.645342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1d005151 00:07:17.891 [2024-07-11 16:22:54.645548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=19c17e7a 00:07:17.891 [2024-07-11 16:22:54.645775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:17.891 [2024-07-11 16:22:54.645968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bafef545f04a9f59, Actual=fafef545f04a9f59 00:07:17.891 [2024-07-11 16:22:54.646170] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.891 [2024-07-11 16:22:54.646398] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:17.891 [2024-07-11 16:22:54.646629] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.891 [2024-07-11 16:22:54.646822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:17.891 [2024-07-11 16:22:54.647038] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=fb1a045bd0617ffb 00:07:17.891 [2024-07-11 16:22:54.647238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=9dab239e52035e33 00:07:17.891 passed 00:07:17.891 Test: dix_sec_512_md_0_error ...passed 00:07:17.891 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-11 16:22:54.647288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
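From dix_sec_512_md_0_error onward the suite replays the same matrix in DIX form. The difference is buffer layout only: DIF interleaves the 8-byte tuple with the data (extended blocks in a single buffer), while DIX carries data and protection information in two separate buffers, which is why nearly every dif_* case above has a dix_* twin. A sizing sketch, assuming the 512-byte block with 8 bytes of metadata that the sec_512_md_8 test names indicate:

    #include <stddef.h>

    enum { DATA_BLOCK = 512, PI_BYTES = 8 };   /* geometry from the test names */

    /* DIF: one interleaved buffer of extended (data + PI) blocks. */
    static size_t dif_buf_bytes(size_t nblocks)
    {
        return nblocks * (DATA_BLOCK + PI_BYTES);
    }

    /* DIX: separate data and metadata buffers of plain blocks. */
    static size_t dix_data_bytes(size_t nblocks) { return nblocks * DATA_BLOCK; }
    static size_t dix_md_bytes(size_t nblocks)   { return nblocks * PI_BYTES; }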
00:07:17.891 passed 00:07:17.891 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:17.891 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:17.892 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:17.892 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:17.892 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:17.892 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:17.892 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:17.892 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:17.892 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-11 16:22:54.681085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=bd4c, Actual=fd4c 00:07:17.892 [2024-07-11 16:22:54.682022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=2490, Actual=6490 00:07:17.892 [2024-07-11 16:22:54.682944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.892 [2024-07-11 16:22:54.683859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.892 [2024-07-11 16:22:54.684822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=40000060 00:07:17.892 [2024-07-11 16:22:54.685789] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=40000060 00:07:17.892 [2024-07-11 16:22:54.686733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fd4c, Actual=e457 00:07:17.892 [2024-07-11 16:22:54.687662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=e4c1, Actual=c863 00:07:17.892 [2024-07-11 16:22:54.688572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=5ab753ed, Actual=1ab753ed 00:07:17.892 [2024-07-11 16:22:54.689585] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a6eade94, Actual=e6eade94 00:07:17.892 [2024-07-11 16:22:54.690552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.892 [2024-07-11 16:22:54.691484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:17.892 [2024-07-11 16:22:54.692478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000000000000060 00:07:18.151 [2024-07-11 16:22:54.693486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000000000000060 00:07:18.151 [2024-07-11 16:22:54.694395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1ab753ed, Actual=1d005151 00:07:18.151 [2024-07-11 16:22:54.695331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=741688fe, Actual=8771b628 00:07:18.151 
[2024-07-11 16:22:54.696256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:18.151 [2024-07-11 16:22:54.697205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=ce1f17a82b9e7aa4, Actual=8e1f17a82b9e7aa4 00:07:18.151 [2024-07-11 16:22:54.698122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:18.151 [2024-07-11 16:22:54.699033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=4088 00:07:18.151 [2024-07-11 16:22:54.699983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=40000060 00:07:18.151 [2024-07-11 16:22:54.700917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=40000060 00:07:18.151 [2024-07-11 16:22:54.701859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728ecc20d3, Actual=fb1a045bd0617ffb 00:07:18.151 passed 00:07:18.151 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-11 16:22:54.702764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=4c87c8f68d0ed55f, Actual=e94ac17389d7bbce 00:07:18.151 [2024-07-11 16:22:54.703091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:18.151 [2024-07-11 16:22:54.703302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed47, Actual=ad47 00:07:18.151 [2024-07-11 16:22:54.703513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:18.151 [2024-07-11 16:22:54.703735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:18.151 [2024-07-11 16:22:54.703959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:18.151 [2024-07-11 16:22:54.704160] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:18.151 [2024-07-11 16:22:54.704390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e457 00:07:18.151 [2024-07-11 16:22:54.704590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=1b4 00:07:18.151 [2024-07-11 16:22:54.704813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:07:18.151 [2024-07-11 16:22:54.705018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=385a16c6, Actual=785a16c6 00:07:18.151 [2024-07-11 16:22:54.705250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:18.151 [2024-07-11 16:22:54.705457] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:18.151 [2024-07-11 16:22:54.705668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:18.151 [2024-07-11 16:22:54.705866] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000000000000058 00:07:18.151 [2024-07-11 16:22:54.706060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=1d005151 00:07:18.151 [2024-07-11 16:22:54.706259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=19c17e7a 00:07:18.151 [2024-07-11 16:22:54.706461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=e576a7728ecc20d3, Actual=a576a7728ecc20d3 00:07:18.151 [2024-07-11 16:22:54.706659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bafef545f04a9f59, Actual=fafef545f04a9f59 00:07:18.151 [2024-07-11 16:22:54.706849] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:18.151 [2024-07-11 16:22:54.707064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:18.151 [2024-07-11 16:22:54.707262] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:18.151 [2024-07-11 16:22:54.707470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:07:18.151 [2024-07-11 16:22:54.707681] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=fb1a045bd0617ffb 00:07:18.151 [2024-07-11 16:22:54.707875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=9dab239e52035e33 00:07:18.151 passed 00:07:18.151 Test: set_md_interleave_iovs_test ...passed 00:07:18.151 Test: set_md_interleave_iovs_split_test ...passed 00:07:18.151 Test: dif_generate_stream_pi_16_test ...passed 00:07:18.151 Test: dif_generate_stream_test ...passed 00:07:18.151 Test: set_md_interleave_iovs_alignment_test ...[2024-07-11 16:22:54.713497] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:07:18.151 passed 00:07:18.151 Test: dif_generate_split_test ...passed 00:07:18.151 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:07:18.151 Test: dif_verify_split_test ...passed 00:07:18.151 Test: dif_verify_stream_multi_segments_test ...passed 00:07:18.151 Test: update_crc32c_pi_16_test ...passed 00:07:18.151 Test: update_crc32c_test ...passed 00:07:18.151 Test: dif_update_crc32c_split_test ...passed 00:07:18.151 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:07:18.151 Test: get_range_with_md_test ...passed 00:07:18.151 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:07:18.151 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:07:18.151 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:18.151 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:07:18.151 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:07:18.151 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:18.151 Test: dif_generate_and_verify_unmap_test ...passed 00:07:18.151 00:07:18.151 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.151 suites 1 1 n/a 0 0 00:07:18.151 tests 79 79 79 0 0 00:07:18.151 asserts 3584 3584 3584 0 n/a 00:07:18.151 00:07:18.151 Elapsed time = 0.305 seconds 00:07:18.151 16:22:54 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:07:18.151 00:07:18.151 00:07:18.151 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.151 http://cunit.sourceforge.net/ 00:07:18.151 00:07:18.151 00:07:18.151 Suite: iov 00:07:18.151 Test: test_single_iov ...passed 00:07:18.151 Test: test_simple_iov ...passed 00:07:18.151 Test: test_complex_iov ...passed 00:07:18.151 Test: test_iovs_to_buf ...passed 00:07:18.152 Test: test_buf_to_iovs ...passed 00:07:18.152 Test: test_memset ...passed 00:07:18.152 Test: test_iov_one ...passed 00:07:18.152 Test: test_iov_xfer ...passed 00:07:18.152 00:07:18.152 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.152 suites 1 1 n/a 0 0 00:07:18.152 tests 8 8 8 0 0 00:07:18.152 asserts 156 156 156 0 n/a 00:07:18.152 00:07:18.152 Elapsed time = 0.000 seconds 00:07:18.152 16:22:54 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:07:18.152 00:07:18.152 00:07:18.152 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.152 http://cunit.sourceforge.net/ 00:07:18.152 00:07:18.152 00:07:18.152 Suite: math 00:07:18.152 Test: test_serial_number_arithmetic ...passed 00:07:18.152 Suite: erase 00:07:18.152 Test: test_memset_s ...passed 00:07:18.152 00:07:18.152 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.152 suites 2 2 n/a 0 0 00:07:18.152 tests 2 2 2 0 0 00:07:18.152 asserts 18 18 18 0 n/a 00:07:18.152 00:07:18.152 Elapsed time = 0.000 seconds 00:07:18.152 16:22:54 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:07:18.152 00:07:18.152 00:07:18.152 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.152 http://cunit.sourceforge.net/ 00:07:18.152 00:07:18.152 00:07:18.152 Suite: pipe 00:07:18.152 Test: test_create_destroy ...passed 00:07:18.152 Test: test_write_get_buffer ...passed 00:07:18.152 Test: test_write_advance ...passed 00:07:18.152 Test: test_read_get_buffer ...passed 00:07:18.152 Test: test_read_advance ...passed 00:07:18.152 Test: test_data ...passed 00:07:18.152 00:07:18.152 Run Summary: Type Total Ran 
Passed Failed Inactive 00:07:18.152 suites 1 1 n/a 0 0 00:07:18.152 tests 6 6 6 0 0 00:07:18.152 asserts 250 250 250 0 n/a 00:07:18.152 00:07:18.152 Elapsed time = 0.000 seconds 00:07:18.152 16:22:54 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:07:18.152 00:07:18.152 00:07:18.152 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.152 http://cunit.sourceforge.net/ 00:07:18.152 00:07:18.152 00:07:18.152 Suite: xor 00:07:18.152 Test: test_xor_gen ...passed 00:07:18.152 00:07:18.152 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.152 suites 1 1 n/a 0 0 00:07:18.152 tests 1 1 1 0 0 00:07:18.152 asserts 17 17 17 0 n/a 00:07:18.152 00:07:18.152 Elapsed time = 0.008 seconds 00:07:18.152 00:07:18.152 real 0m0.674s 00:07:18.152 user 0m0.519s 00:07:18.152 sys 0m0.161s 00:07:18.152 16:22:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.152 16:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:18.152 ************************************ 00:07:18.152 END TEST unittest_util 00:07:18.152 ************************************ 00:07:18.152 16:22:54 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:18.152 16:22:54 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:18.152 16:22:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.152 16:22:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.152 16:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:18.152 ************************************ 00:07:18.152 START TEST unittest_vhost 00:07:18.152 ************************************ 00:07:18.152 16:22:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:18.152 00:07:18.152 00:07:18.152 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.152 http://cunit.sourceforge.net/ 00:07:18.152 00:07:18.152 00:07:18.152 Suite: vhost_suite 00:07:18.411 Test: desc_to_iov_test ...[2024-07-11 16:22:54.959185] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:07:18.411 passed 00:07:18.411 Test: create_controller_test ...[2024-07-11 16:22:54.963828] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:18.411 [2024-07-11 16:22:54.964078] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:07:18.411 [2024-07-11 16:22:54.964322] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:18.411 [2024-07-11 16:22:54.964517] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:07:18.411 [2024-07-11 16:22:54.964703] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:07:18.411 [2024-07-11 16:22:54.964917] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-07-11 16:22:54.966107] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:07:18.411 passed 00:07:18.411 Test: session_find_by_vid_test ...passed 00:07:18.411 Test: remove_controller_test ...[2024-07-11 16:22:54.968672] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:07:18.411 passed 00:07:18.411 Test: vq_avail_ring_get_test ...passed 00:07:18.411 Test: vq_packed_ring_test ...passed 00:07:18.411 Test: vhost_blk_construct_test ...passed 00:07:18.411 00:07:18.411 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.411 suites 1 1 n/a 0 0 00:07:18.411 tests 7 7 7 0 0 00:07:18.411 asserts 145 145 145 0 n/a 00:07:18.411 00:07:18.411 Elapsed time = 0.012 seconds 00:07:18.411 00:07:18.411 real 0m0.051s 00:07:18.411 user 0m0.019s 00:07:18.411 sys 0m0.030s 00:07:18.411 16:22:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.411 16:22:54 -- common/autotest_common.sh@10 -- # set +x 00:07:18.411 ************************************ 00:07:18.411 END TEST unittest_vhost 00:07:18.411 ************************************ 00:07:18.411 16:22:55 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:18.411 16:22:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.411 16:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.411 16:22:55 -- common/autotest_common.sh@10 -- # set +x 00:07:18.411 ************************************ 00:07:18.411 START TEST unittest_dma 00:07:18.411 ************************************ 00:07:18.411 16:22:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:18.411 00:07:18.411 00:07:18.411 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.411 http://cunit.sourceforge.net/ 00:07:18.411 00:07:18.411 00:07:18.411 Suite: dma_suite 00:07:18.411 Test: test_dma ...[2024-07-11 16:22:55.051998] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:07:18.411 passed 00:07:18.411 00:07:18.411 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.411 suites 1 1 n/a 0 0 00:07:18.411 tests 1 1 1 0 0 00:07:18.411 asserts 50 50 50 0 n/a 00:07:18.411 00:07:18.411 Elapsed time = 0.001 seconds 00:07:18.411 00:07:18.411 real 0m0.030s 00:07:18.411 user 0m0.027s 00:07:18.411 sys 0m0.003s 00:07:18.411 16:22:55 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.411 16:22:55 -- common/autotest_common.sh@10 -- # set +x 00:07:18.411 ************************************ 00:07:18.411 END TEST unittest_dma 00:07:18.411 ************************************ 00:07:18.411 16:22:55 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:07:18.411 16:22:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:18.411 16:22:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.411 16:22:55 -- common/autotest_common.sh@10 -- # set +x 00:07:18.411 ************************************ 00:07:18.411 START TEST unittest_init 00:07:18.411 ************************************ 00:07:18.411 16:22:55 -- common/autotest_common.sh@1104 -- # unittest_init 00:07:18.411 16:22:55 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:07:18.411 00:07:18.411 00:07:18.411 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.411 http://cunit.sourceforge.net/ 00:07:18.411 00:07:18.411 00:07:18.411 Suite: subsystem_suite 00:07:18.411 Test: subsystem_sort_test_depends_on_single ...passed 00:07:18.411 Test: subsystem_sort_test_depends_on_multiple ...passed 00:07:18.411 Test: subsystem_sort_test_missing_dependency ...[2024-07-11 16:22:55.133274] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:07:18.411 [2024-07-11 16:22:55.133623] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:07:18.411 passed 00:07:18.411 00:07:18.411 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.411 suites 1 1 n/a 0 0 00:07:18.411 tests 3 3 3 0 0 00:07:18.411 asserts 20 20 20 0 n/a 00:07:18.411 00:07:18.411 Elapsed time = 0.001 seconds 00:07:18.411 00:07:18.411 real 0m0.034s 00:07:18.411 user 0m0.025s 00:07:18.411 sys 0m0.009s 00:07:18.412 16:22:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.412 16:22:55 -- common/autotest_common.sh@10 -- # set +x 00:07:18.412 ************************************ 00:07:18.412 END TEST unittest_init 00:07:18.412 ************************************ 00:07:18.412 16:22:55 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:07:18.412 16:22:55 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:07:18.412 16:22:55 -- unit/unittest.sh@290 -- # hostname 00:07:18.412 16:22:55 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:18.670 geninfo: WARNING: invalid characters removed from testname! 
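Context for the subsystem_sort failures logged above: unittest_init's subsystem_suite registers fake subsystems with broken dependency graphs and asserts that spdk_subsystem_init() reports them ("subsystem A dependency B is missing", "subsystem C is missing"). As a rough sketch of the mechanism under test, an SPDK subsystem declares itself and its ordering constraints along these lines (macro and struct names as found in spdk_internal/event.h of roughly this SPDK vintage; treat the exact details as illustrative, not authoritative):

    #include "spdk_internal/event.h"

    static void subsystem_a_init(void) { spdk_subsystem_init_next(0); }
    static void subsystem_a_fini(void) { spdk_subsystem_fini_next(); }

    static struct spdk_subsystem g_subsystem_a = {
        .name = "A",
        .init = subsystem_a_init,
        .fini = subsystem_a_fini,
    };

    SPDK_SUBSYSTEM_REGISTER(g_subsystem_a);
    /* Declares "A depends on B". If nothing ever registers B, the
     * topological sort in spdk_subsystem_init() fails with exactly the
     * "dependency B is missing" error asserted by the unit test. */
    SPDK_SUBSYSTEM_DEPEND(A, B)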
00:07:45.210 16:23:21 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:07:50.474 16:23:26 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:53.005 16:23:29 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:56.281 16:23:32 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:58.809 16:23:35 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:01.353 16:23:38 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:03.884 16:23:40 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:06.427 16:23:42 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:06.427 16:23:42 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:06.685 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:06.685 Found 309 entries. 
00:08:06.685 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:08:06.685 Writing .css and .png files. 00:08:06.685 Generating output. 00:08:06.685 Processing file include/linux/virtio_ring.h 00:08:07.251 Processing file include/spdk/bdev_module.h 00:08:07.251 Processing file include/spdk/thread.h 00:08:07.251 Processing file include/spdk/nvme.h 00:08:07.251 Processing file include/spdk/base64.h 00:08:07.251 Processing file include/spdk/nvme_spec.h 00:08:07.251 Processing file include/spdk/util.h 00:08:07.251 Processing file include/spdk/endian.h 00:08:07.251 Processing file include/spdk/histogram_data.h 00:08:07.251 Processing file include/spdk/nvmf_transport.h 00:08:07.251 Processing file include/spdk/mmio.h 00:08:07.251 Processing file include/spdk/trace.h 00:08:07.251 Processing file include/spdk_internal/rdma.h 00:08:07.251 Processing file include/spdk_internal/sgl.h 00:08:07.251 Processing file include/spdk_internal/sock.h 00:08:07.251 Processing file include/spdk_internal/virtio.h 00:08:07.251 Processing file include/spdk_internal/nvme_tcp.h 00:08:07.251 Processing file include/spdk_internal/utf.h 00:08:07.251 Processing file lib/accel/accel.c 00:08:07.251 Processing file lib/accel/accel_sw.c 00:08:07.251 Processing file lib/accel/accel_rpc.c 00:08:07.509 Processing file lib/bdev/part.c 00:08:07.509 Processing file lib/bdev/bdev.c 00:08:07.509 Processing file lib/bdev/bdev_zone.c 00:08:07.509 Processing file lib/bdev/scsi_nvme.c 00:08:07.509 Processing file lib/bdev/bdev_rpc.c 00:08:07.767 Processing file lib/blob/blob_bs_dev.c 00:08:07.767 Processing file lib/blob/zeroes.c 00:08:07.767 Processing file lib/blob/blobstore.c 00:08:07.767 Processing file lib/blob/request.c 00:08:07.767 Processing file lib/blob/blobstore.h 00:08:08.025 Processing file lib/blobfs/tree.c 00:08:08.025 Processing file lib/blobfs/blobfs.c 00:08:08.025 Processing file lib/conf/conf.c 00:08:08.025 Processing file lib/dma/dma.c 00:08:08.284 Processing file lib/env_dpdk/pci_virtio.c 00:08:08.284 Processing file lib/env_dpdk/init.c 00:08:08.284 Processing file lib/env_dpdk/pci_ioat.c 00:08:08.284 Processing file lib/env_dpdk/sigbus_handler.c 00:08:08.284 Processing file lib/env_dpdk/env.c 00:08:08.284 Processing file lib/env_dpdk/pci_dpdk.c 00:08:08.284 Processing file lib/env_dpdk/threads.c 00:08:08.284 Processing file lib/env_dpdk/pci_vmd.c 00:08:08.284 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:08:08.284 Processing file lib/env_dpdk/pci.c 00:08:08.284 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:08:08.284 Processing file lib/env_dpdk/memory.c 00:08:08.284 Processing file lib/env_dpdk/pci_event.c 00:08:08.284 Processing file lib/env_dpdk/pci_idxd.c 00:08:08.542 Processing file lib/event/app_rpc.c 00:08:08.542 Processing file lib/event/scheduler_static.c 00:08:08.542 Processing file lib/event/reactor.c 00:08:08.542 Processing file lib/event/app.c 00:08:08.542 Processing file lib/event/log_rpc.c 00:08:09.109 Processing file lib/ftl/ftl_core.h 00:08:09.109 Processing file lib/ftl/ftl_nv_cache_io.h 00:08:09.109 Processing file lib/ftl/ftl_l2p_cache.c 00:08:09.109 Processing file lib/ftl/ftl_trace.c 00:08:09.109 Processing file lib/ftl/ftl_writer.h 00:08:09.109 Processing file lib/ftl/ftl_l2p_flat.c 00:08:09.109 Processing file lib/ftl/ftl_rq.c 00:08:09.109 Processing file lib/ftl/ftl_l2p.c 00:08:09.109 Processing file lib/ftl/ftl_io.h 00:08:09.109 Processing file lib/ftl/ftl_nv_cache.h 00:08:09.109 Processing file lib/ftl/ftl_debug.h 00:08:09.109 Processing file lib/ftl/ftl_core.c 00:08:09.109 
Processing file lib/ftl/ftl_sb.c 00:08:09.109 Processing file lib/ftl/ftl_io.c 00:08:09.109 Processing file lib/ftl/ftl_writer.c 00:08:09.109 Processing file lib/ftl/ftl_p2l.c 00:08:09.109 Processing file lib/ftl/ftl_band.c 00:08:09.109 Processing file lib/ftl/ftl_init.c 00:08:09.109 Processing file lib/ftl/ftl_band_ops.c 00:08:09.109 Processing file lib/ftl/ftl_debug.c 00:08:09.109 Processing file lib/ftl/ftl_reloc.c 00:08:09.109 Processing file lib/ftl/ftl_layout.c 00:08:09.109 Processing file lib/ftl/ftl_nv_cache.c 00:08:09.109 Processing file lib/ftl/ftl_band.h 00:08:09.109 Processing file lib/ftl/base/ftl_base_bdev.c 00:08:09.109 Processing file lib/ftl/base/ftl_base_dev.c 00:08:09.367 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:08:09.367 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:08:09.367 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:08:09.367 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:08:09.367 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:08:09.367 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:08:09.367 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:08:09.367 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:08:09.367 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:08:09.367 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:08:09.367 Processing file lib/ftl/mngt/ftl_mngt.c 00:08:09.367 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:08:09.367 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:08:09.367 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:08:09.367 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:08:09.625 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:08:09.625 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:08:09.625 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:08:09.625 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:08:09.885 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:08:09.885 Processing file lib/ftl/utils/ftl_mempool.c 00:08:09.885 Processing file lib/ftl/utils/ftl_df.h 00:08:09.885 Processing file lib/ftl/utils/ftl_bitmap.c 00:08:09.885 Processing file lib/ftl/utils/ftl_addr_utils.h 00:08:09.885 Processing file lib/ftl/utils/ftl_md.c 00:08:09.885 Processing file lib/ftl/utils/ftl_conf.c 00:08:09.885 Processing file lib/ftl/utils/ftl_property.c 00:08:09.885 Processing file lib/ftl/utils/ftl_property.h 00:08:09.885 Processing file lib/idxd/idxd_internal.h 00:08:09.885 Processing file lib/idxd/idxd.c 00:08:09.885 Processing file lib/idxd/idxd_user.c 00:08:09.885 Processing file lib/init/rpc.c 00:08:09.885 Processing file lib/init/json_config.c 00:08:09.885 Processing file lib/init/subsystem_rpc.c 00:08:09.885 Processing file lib/init/subsystem.c 00:08:10.144 Processing file lib/ioat/ioat.c 00:08:10.144 Processing file lib/ioat/ioat_internal.h 00:08:10.402 Processing file lib/iscsi/iscsi.h 00:08:10.402 Processing file lib/iscsi/iscsi_rpc.c 00:08:10.402 Processing file lib/iscsi/iscsi.c 00:08:10.402 Processing file lib/iscsi/task.h 00:08:10.402 Processing file lib/iscsi/md5.c 00:08:10.402 Processing file lib/iscsi/portal_grp.c 00:08:10.402 Processing file lib/iscsi/tgt_node.c 00:08:10.402 Processing file lib/iscsi/iscsi_subsystem.c 00:08:10.402 Processing file lib/iscsi/conn.c 00:08:10.402 Processing file lib/iscsi/init_grp.c 00:08:10.403 Processing file lib/iscsi/task.c 00:08:10.403 Processing file lib/iscsi/param.c 00:08:10.661 Processing file lib/json/json_write.c 00:08:10.662 Processing file lib/json/json_parse.c 00:08:10.662 Processing file lib/json/json_util.c 00:08:10.662 Processing file 
lib/jsonrpc/jsonrpc_server_tcp.c 00:08:10.662 Processing file lib/jsonrpc/jsonrpc_client.c 00:08:10.662 Processing file lib/jsonrpc/jsonrpc_server.c 00:08:10.662 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:08:10.921 Processing file lib/log/log.c 00:08:10.921 Processing file lib/log/log_flags.c 00:08:10.921 Processing file lib/log/log_deprecated.c 00:08:10.921 Processing file lib/lvol/lvol.c 00:08:10.921 Processing file lib/nbd/nbd.c 00:08:10.921 Processing file lib/nbd/nbd_rpc.c 00:08:11.179 Processing file lib/notify/notify_rpc.c 00:08:11.179 Processing file lib/notify/notify.c 00:08:11.748 Processing file lib/nvme/nvme_fabric.c 00:08:11.748 Processing file lib/nvme/nvme_discovery.c 00:08:11.748 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:08:11.748 Processing file lib/nvme/nvme_ctrlr.c 00:08:11.748 Processing file lib/nvme/nvme_qpair.c 00:08:11.748 Processing file lib/nvme/nvme_rdma.c 00:08:11.748 Processing file lib/nvme/nvme_ns_cmd.c 00:08:11.748 Processing file lib/nvme/nvme_tcp.c 00:08:11.748 Processing file lib/nvme/nvme.c 00:08:11.748 Processing file lib/nvme/nvme_opal.c 00:08:11.748 Processing file lib/nvme/nvme_quirks.c 00:08:11.748 Processing file lib/nvme/nvme_cuse.c 00:08:11.748 Processing file lib/nvme/nvme_pcie_internal.h 00:08:11.748 Processing file lib/nvme/nvme_transport.c 00:08:11.748 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:08:11.748 Processing file lib/nvme/nvme_pcie.c 00:08:11.748 Processing file lib/nvme/nvme_pcie_common.c 00:08:11.748 Processing file lib/nvme/nvme_poll_group.c 00:08:11.748 Processing file lib/nvme/nvme_zns.c 00:08:11.748 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:08:11.748 Processing file lib/nvme/nvme_io_msg.c 00:08:11.748 Processing file lib/nvme/nvme_internal.h 00:08:11.748 Processing file lib/nvme/nvme_vfio_user.c 00:08:11.748 Processing file lib/nvme/nvme_ns.c 00:08:12.316 Processing file lib/nvmf/nvmf_rpc.c 00:08:12.316 Processing file lib/nvmf/rdma.c 00:08:12.316 Processing file lib/nvmf/tcp.c 00:08:12.316 Processing file lib/nvmf/ctrlr_bdev.c 00:08:12.316 Processing file lib/nvmf/subsystem.c 00:08:12.316 Processing file lib/nvmf/transport.c 00:08:12.316 Processing file lib/nvmf/ctrlr.c 00:08:12.316 Processing file lib/nvmf/ctrlr_discovery.c 00:08:12.316 Processing file lib/nvmf/nvmf.c 00:08:12.316 Processing file lib/nvmf/nvmf_internal.h 00:08:12.575 Processing file lib/rdma/common.c 00:08:12.575 Processing file lib/rdma/rdma_verbs.c 00:08:12.575 Processing file lib/rpc/rpc.c 00:08:12.835 Processing file lib/scsi/scsi_rpc.c 00:08:12.835 Processing file lib/scsi/scsi_bdev.c 00:08:12.835 Processing file lib/scsi/dev.c 00:08:12.835 Processing file lib/scsi/scsi_pr.c 00:08:12.835 Processing file lib/scsi/scsi.c 00:08:12.835 Processing file lib/scsi/lun.c 00:08:12.835 Processing file lib/scsi/task.c 00:08:12.835 Processing file lib/scsi/port.c 00:08:12.835 Processing file lib/sock/sock_rpc.c 00:08:12.835 Processing file lib/sock/sock.c 00:08:13.093 Processing file lib/thread/thread.c 00:08:13.093 Processing file lib/thread/iobuf.c 00:08:13.093 Processing file lib/trace/trace_rpc.c 00:08:13.093 Processing file lib/trace/trace.c 00:08:13.093 Processing file lib/trace/trace_flags.c 00:08:13.093 Processing file lib/trace_parser/trace.cpp 00:08:13.352 Processing file lib/ut/ut.c 00:08:13.352 Processing file lib/ut_mock/mock.c 00:08:13.611 Processing file lib/util/dif.c 00:08:13.611 Processing file lib/util/xor.c 00:08:13.611 Processing file lib/util/bit_array.c 00:08:13.611 Processing file lib/util/crc32.c 00:08:13.611 
Processing file lib/util/crc32_ieee.c 00:08:13.611 Processing file lib/util/strerror_tls.c 00:08:13.611 Processing file lib/util/math.c 00:08:13.611 Processing file lib/util/pipe.c 00:08:13.611 Processing file lib/util/iov.c 00:08:13.611 Processing file lib/util/hexlify.c 00:08:13.611 Processing file lib/util/cpuset.c 00:08:13.611 Processing file lib/util/zipf.c 00:08:13.611 Processing file lib/util/fd_group.c 00:08:13.611 Processing file lib/util/base64.c 00:08:13.611 Processing file lib/util/uuid.c 00:08:13.611 Processing file lib/util/string.c 00:08:13.611 Processing file lib/util/crc32c.c 00:08:13.611 Processing file lib/util/crc64.c 00:08:13.611 Processing file lib/util/file.c 00:08:13.611 Processing file lib/util/fd.c 00:08:13.611 Processing file lib/util/crc16.c 00:08:13.869 Processing file lib/vfio_user/host/vfio_user_pci.c 00:08:13.869 Processing file lib/vfio_user/host/vfio_user.c 00:08:13.869 Processing file lib/vhost/vhost_rpc.c 00:08:13.869 Processing file lib/vhost/vhost.c 00:08:13.869 Processing file lib/vhost/rte_vhost_user.c 00:08:13.869 Processing file lib/vhost/vhost_blk.c 00:08:13.869 Processing file lib/vhost/vhost_internal.h 00:08:13.869 Processing file lib/vhost/vhost_scsi.c 00:08:14.128 Processing file lib/virtio/virtio.c 00:08:14.128 Processing file lib/virtio/virtio_pci.c 00:08:14.128 Processing file lib/virtio/virtio_vhost_user.c 00:08:14.128 Processing file lib/virtio/virtio_vfio_user.c 00:08:14.128 Processing file lib/vmd/vmd.c 00:08:14.128 Processing file lib/vmd/led.c 00:08:14.387 Processing file module/accel/dsa/accel_dsa_rpc.c 00:08:14.387 Processing file module/accel/dsa/accel_dsa.c 00:08:14.387 Processing file module/accel/error/accel_error_rpc.c 00:08:14.387 Processing file module/accel/error/accel_error.c 00:08:14.387 Processing file module/accel/iaa/accel_iaa_rpc.c 00:08:14.387 Processing file module/accel/iaa/accel_iaa.c 00:08:14.645 Processing file module/accel/ioat/accel_ioat.c 00:08:14.645 Processing file module/accel/ioat/accel_ioat_rpc.c 00:08:14.645 Processing file module/bdev/aio/bdev_aio.c 00:08:14.645 Processing file module/bdev/aio/bdev_aio_rpc.c 00:08:14.645 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:08:14.645 Processing file module/bdev/delay/vbdev_delay.c 00:08:14.903 Processing file module/bdev/error/vbdev_error_rpc.c 00:08:14.903 Processing file module/bdev/error/vbdev_error.c 00:08:14.903 Processing file module/bdev/ftl/bdev_ftl.c 00:08:14.903 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:08:14.903 Processing file module/bdev/gpt/vbdev_gpt.c 00:08:14.903 Processing file module/bdev/gpt/gpt.c 00:08:14.903 Processing file module/bdev/gpt/gpt.h 00:08:15.161 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:08:15.161 Processing file module/bdev/iscsi/bdev_iscsi.c 00:08:15.161 Processing file module/bdev/lvol/vbdev_lvol.c 00:08:15.162 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:08:15.421 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:08:15.421 Processing file module/bdev/malloc/bdev_malloc.c 00:08:15.421 Processing file module/bdev/null/bdev_null.c 00:08:15.421 Processing file module/bdev/null/bdev_null_rpc.c 00:08:15.989 Processing file module/bdev/nvme/bdev_nvme.c 00:08:15.989 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:08:15.989 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:08:15.989 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:08:15.989 Processing file module/bdev/nvme/bdev_mdns_client.c 00:08:15.989 Processing file module/bdev/nvme/nvme_rpc.c 00:08:15.989 Processing file 
module/bdev/nvme/vbdev_opal.c 00:08:15.989 Processing file module/bdev/passthru/vbdev_passthru.c 00:08:15.989 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:08:16.247 Processing file module/bdev/raid/bdev_raid.h 00:08:16.247 Processing file module/bdev/raid/bdev_raid.c 00:08:16.247 Processing file module/bdev/raid/raid1.c 00:08:16.247 Processing file module/bdev/raid/bdev_raid_rpc.c 00:08:16.247 Processing file module/bdev/raid/concat.c 00:08:16.247 Processing file module/bdev/raid/bdev_raid_sb.c 00:08:16.247 Processing file module/bdev/raid/raid0.c 00:08:16.247 Processing file module/bdev/raid/raid5f.c 00:08:16.247 Processing file module/bdev/split/vbdev_split.c 00:08:16.247 Processing file module/bdev/split/vbdev_split_rpc.c 00:08:16.247 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:08:16.247 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:08:16.247 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:08:16.530 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:08:16.530 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:08:16.530 Processing file module/blob/bdev/blob_bdev.c 00:08:16.530 Processing file module/blobfs/bdev/blobfs_bdev.c 00:08:16.530 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:08:16.530 Processing file module/env_dpdk/env_dpdk_rpc.c 00:08:16.792 Processing file module/event/subsystems/accel/accel.c 00:08:16.792 Processing file module/event/subsystems/bdev/bdev.c 00:08:16.792 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:08:16.792 Processing file module/event/subsystems/iobuf/iobuf.c 00:08:16.792 Processing file module/event/subsystems/iscsi/iscsi.c 00:08:17.050 Processing file module/event/subsystems/nbd/nbd.c 00:08:17.050 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:08:17.050 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:08:17.050 Processing file module/event/subsystems/scheduler/scheduler.c 00:08:17.050 Processing file module/event/subsystems/scsi/scsi.c 00:08:17.050 Processing file module/event/subsystems/sock/sock.c 00:08:17.309 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:08:17.309 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:08:17.309 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:08:17.309 Processing file module/event/subsystems/vmd/vmd.c 00:08:17.309 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:08:17.568 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:08:17.568 Processing file module/scheduler/gscheduler/gscheduler.c 00:08:17.568 Processing file module/sock/sock_kernel.h 00:08:17.568 Processing file module/sock/posix/posix.c 00:08:17.568 Writing directory view page. 
00:08:17.568 Overall coverage rate: 00:08:17.568 lines......: 39.1% (39263 of 100392 lines) 00:08:17.568 functions..: 42.8% (3587 of 8384 functions) 00:08:17.568 00:08:17.568 00:08:17.568 16:23:54 -- unit/unittest.sh@302 -- # set +x 00:08:17.568 ===================== 00:08:17.568 All unit tests passed 00:08:17.568 ===================== 00:08:17.568 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:17.568 00:08:17.568 00:08:17.568 00:08:17.568 real 3m11.179s 00:08:17.568 user 2m45.600s 00:08:17.568 sys 0m14.334s 00:08:17.568 16:23:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.568 16:23:54 -- common/autotest_common.sh@10 -- # set +x 00:08:17.568 ************************************ 00:08:17.568 END TEST unittest 00:08:17.568 ************************************ 00:08:17.826 16:23:54 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:08:17.826 16:23:54 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:17.826 16:23:54 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:08:17.826 16:23:54 -- spdk/autotest.sh@173 -- # timing_enter lib 00:08:17.826 16:23:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:17.826 16:23:54 -- common/autotest_common.sh@10 -- # set +x 00:08:17.826 16:23:54 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:17.827 16:23:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:17.827 16:23:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.827 16:23:54 -- common/autotest_common.sh@10 -- # set +x 00:08:17.827 ************************************ 00:08:17.827 START TEST env 00:08:17.827 ************************************ 00:08:17.827 16:23:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:17.827 * Looking for test storage... 
00:08:17.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:17.827 16:23:54 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:17.827 16:23:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:17.827 16:23:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.827 16:23:54 -- common/autotest_common.sh@10 -- # set +x 00:08:17.827 ************************************ 00:08:17.827 START TEST env_memory 00:08:17.827 ************************************ 00:08:17.827 16:23:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:17.827 00:08:17.827 00:08:17.827 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.827 http://cunit.sourceforge.net/ 00:08:17.827 00:08:17.827 00:08:17.827 Suite: memory 00:08:17.827 Test: alloc and free memory map ...[2024-07-11 16:23:54.560433] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:17.827 passed 00:08:17.827 Test: mem map translation ...[2024-07-11 16:23:54.607948] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:17.827 [2024-07-11 16:23:54.608062] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:17.827 [2024-07-11 16:23:54.608166] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:17.827 [2024-07-11 16:23:54.608285] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:18.085 passed 00:08:18.085 Test: mem map registration ...[2024-07-11 16:23:54.693410] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:18.085 [2024-07-11 16:23:54.693510] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:18.085 passed 00:08:18.085 Test: mem map adjacent registrations ...passed 00:08:18.085 00:08:18.085 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.085 suites 1 1 n/a 0 0 00:08:18.085 tests 4 4 4 0 0 00:08:18.085 asserts 152 152 152 0 n/a 00:08:18.085 00:08:18.085 Elapsed time = 0.293 seconds 00:08:18.085 ************************************ 00:08:18.085 END TEST env_memory 00:08:18.085 ************************************ 00:08:18.085 00:08:18.085 real 0m0.325s 00:08:18.085 user 0m0.305s 00:08:18.085 sys 0m0.021s 00:08:18.085 16:23:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.085 16:23:54 -- common/autotest_common.sh@10 -- # set +x 00:08:18.085 16:23:54 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:18.085 16:23:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:18.085 16:23:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.085 16:23:54 -- common/autotest_common.sh@10 -- # set +x 00:08:18.085 ************************************ 00:08:18.085 START TEST env_vtophys 00:08:18.085 ************************************ 00:08:18.085 16:23:54 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:18.343 EAL: lib.eal log level changed from notice to debug 00:08:18.343 EAL: Detected lcore 0 as core 0 on socket 0 00:08:18.343 EAL: Detected lcore 1 as core 0 on socket 0 00:08:18.343 EAL: Detected lcore 2 as core 0 on socket 0 00:08:18.343 EAL: Detected lcore 3 as core 0 on socket 0 00:08:18.343 EAL: Detected lcore 4 as core 0 on socket 0 00:08:18.343 EAL: Detected lcore 5 as core 0 on socket 0 00:08:18.343 EAL: Detected lcore 6 as core 0 on socket 0 00:08:18.343 EAL: Detected lcore 7 as core 0 on socket 0 00:08:18.343 EAL: Detected lcore 8 as core 0 on socket 0 00:08:18.343 EAL: Detected lcore 9 as core 0 on socket 0 00:08:18.343 EAL: Maximum logical cores by configuration: 128 00:08:18.343 EAL: Detected CPU lcores: 10 00:08:18.343 EAL: Detected NUMA nodes: 1 00:08:18.343 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:08:18.343 EAL: Checking presence of .so 'librte_eal.so.24' 00:08:18.343 EAL: Checking presence of .so 'librte_eal.so' 00:08:18.343 EAL: Detected static linkage of DPDK 00:08:18.343 EAL: No shared files mode enabled, IPC will be disabled 00:08:18.343 EAL: Selected IOVA mode 'PA' 00:08:18.343 EAL: Probing VFIO support... 00:08:18.343 EAL: IOMMU type 1 (Type 1) is supported 00:08:18.343 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:18.343 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:18.343 EAL: VFIO support initialized 00:08:18.343 EAL: Ask a virtual area of 0x2e000 bytes 00:08:18.343 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:18.343 EAL: Setting up physically contiguous memory... 00:08:18.343 EAL: Setting maximum number of open files to 1048576 00:08:18.343 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:18.343 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:18.343 EAL: Ask a virtual area of 0x61000 bytes 00:08:18.343 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:18.343 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:18.343 EAL: Ask a virtual area of 0x400000000 bytes 00:08:18.343 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:18.343 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:18.343 EAL: Ask a virtual area of 0x61000 bytes 00:08:18.343 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:18.343 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:18.343 EAL: Ask a virtual area of 0x400000000 bytes 00:08:18.343 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:18.343 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:18.343 EAL: Ask a virtual area of 0x61000 bytes 00:08:18.343 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:18.343 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:18.343 EAL: Ask a virtual area of 0x400000000 bytes 00:08:18.343 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:18.343 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:18.343 EAL: Ask a virtual area of 0x61000 bytes 00:08:18.343 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:18.343 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:18.343 EAL: Ask a virtual area of 0x400000000 bytes 00:08:18.343 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:18.343 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:18.343 EAL: Hugepages will be freed exactly as allocated. 
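The vtophys run above exercises SPDK's virtual-to-physical translation across the EAL memseg lists just printed. As background, outside of DPDK the same translation can be sketched through the kernel's pagemap interface; the helper below is an illustrative stand-in, not SPDK's implementation (which resolves addresses through its registered memory maps instead):

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Translate a virtual address to a physical address via
     * /proc/self/pagemap. Needs CAP_SYS_ADMIN on modern kernels to see
     * real frame numbers; returns 0 on failure. */
    static uint64_t vtophys_pagemap(const void *vaddr)
    {
        long page_size = sysconf(_SC_PAGESIZE);
        uint64_t vfn = (uint64_t)vaddr / (uint64_t)page_size;
        uint64_t entry = 0;
        int fd = open("/proc/self/pagemap", O_RDONLY);

        if (fd < 0) {
            return 0;
        }
        /* Each page has one 8-byte entry, indexed by virtual frame number. */
        if (pread(fd, &entry, sizeof(entry), (off_t)(vfn * sizeof(entry)))
                != (ssize_t)sizeof(entry)) {
            close(fd);
            return 0;
        }
        close(fd);
        if (!(entry & (1ULL << 63))) {  /* bit 63: page present */
            return 0;
        }
        /* Bits 0-54 hold the physical frame number. */
        return (entry & ((1ULL << 55) - 1)) * (uint64_t)page_size
               + (uint64_t)vaddr % (uint64_t)page_size;
    }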
00:08:18.343 EAL: No shared files mode enabled, IPC is disabled 00:08:18.343 EAL: No shared files mode enabled, IPC is disabled 00:08:18.343 EAL: TSC frequency is ~2200000 KHz 00:08:18.343 EAL: Main lcore 0 is ready (tid=7f4cfe336a40;cpuset=[0]) 00:08:18.343 EAL: Trying to obtain current memory policy. 00:08:18.343 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:18.343 EAL: Restoring previous memory policy: 0 00:08:18.343 EAL: request: mp_malloc_sync 00:08:18.343 EAL: No shared files mode enabled, IPC is disabled 00:08:18.343 EAL: Heap on socket 0 was expanded by 2MB 00:08:18.343 EAL: No shared files mode enabled, IPC is disabled 00:08:18.343 EAL: Mem event callback 'spdk:(nil)' registered 00:08:18.343 00:08:18.343 00:08:18.343 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.343 http://cunit.sourceforge.net/ 00:08:18.343 00:08:18.343 00:08:18.343 Suite: components_suite 00:08:18.911 Test: vtophys_malloc_test ...passed 00:08:18.911 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:18.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:18.911 EAL: Restoring previous memory policy: 0 00:08:18.911 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.911 EAL: request: mp_malloc_sync 00:08:18.911 EAL: No shared files mode enabled, IPC is disabled 00:08:18.911 EAL: Heap on socket 0 was expanded by 4MB 00:08:18.911 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.911 EAL: request: mp_malloc_sync 00:08:18.911 EAL: No shared files mode enabled, IPC is disabled 00:08:18.911 EAL: Heap on socket 0 was shrunk by 4MB 00:08:18.911 EAL: Trying to obtain current memory policy. 00:08:18.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:18.911 EAL: Restoring previous memory policy: 0 00:08:18.911 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.911 EAL: request: mp_malloc_sync 00:08:18.911 EAL: No shared files mode enabled, IPC is disabled 00:08:18.911 EAL: Heap on socket 0 was expanded by 6MB 00:08:18.911 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.911 EAL: request: mp_malloc_sync 00:08:18.911 EAL: No shared files mode enabled, IPC is disabled 00:08:18.911 EAL: Heap on socket 0 was shrunk by 6MB 00:08:18.911 EAL: Trying to obtain current memory policy. 00:08:18.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:18.911 EAL: Restoring previous memory policy: 0 00:08:18.911 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.911 EAL: request: mp_malloc_sync 00:08:18.911 EAL: No shared files mode enabled, IPC is disabled 00:08:18.911 EAL: Heap on socket 0 was expanded by 10MB 00:08:18.911 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.911 EAL: request: mp_malloc_sync 00:08:18.911 EAL: No shared files mode enabled, IPC is disabled 00:08:18.911 EAL: Heap on socket 0 was shrunk by 10MB 00:08:18.911 EAL: Trying to obtain current memory policy. 00:08:18.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:18.911 EAL: Restoring previous memory policy: 0 00:08:18.911 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.911 EAL: request: mp_malloc_sync 00:08:18.911 EAL: No shared files mode enabled, IPC is disabled 00:08:18.911 EAL: Heap on socket 0 was expanded by 18MB 00:08:18.911 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.911 EAL: request: mp_malloc_sync 00:08:18.911 EAL: No shared files mode enabled, IPC is disabled 00:08:18.911 EAL: Heap on socket 0 was shrunk by 18MB 00:08:18.911 EAL: Trying to obtain current memory policy. 
00:08:18.911 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:18.911 EAL: Restoring previous memory policy: 0 00:08:18.911 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.911 EAL: request: mp_malloc_sync 00:08:18.911 EAL: No shared files mode enabled, IPC is disabled 00:08:18.911 EAL: Heap on socket 0 was expanded by 34MB 00:08:18.911 EAL: Calling mem event callback 'spdk:(nil)' 00:08:18.911 EAL: request: mp_malloc_sync 00:08:18.911 EAL: No shared files mode enabled, IPC is disabled 00:08:18.911 EAL: Heap on socket 0 was shrunk by 34MB 00:08:19.170 EAL: Trying to obtain current memory policy. 00:08:19.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:19.170 EAL: Restoring previous memory policy: 0 00:08:19.170 EAL: Calling mem event callback 'spdk:(nil)' 00:08:19.170 EAL: request: mp_malloc_sync 00:08:19.170 EAL: No shared files mode enabled, IPC is disabled 00:08:19.170 EAL: Heap on socket 0 was expanded by 66MB 00:08:19.170 EAL: Calling mem event callback 'spdk:(nil)' 00:08:19.170 EAL: request: mp_malloc_sync 00:08:19.170 EAL: No shared files mode enabled, IPC is disabled 00:08:19.170 EAL: Heap on socket 0 was shrunk by 66MB 00:08:19.170 EAL: Trying to obtain current memory policy. 00:08:19.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:19.170 EAL: Restoring previous memory policy: 0 00:08:19.170 EAL: Calling mem event callback 'spdk:(nil)' 00:08:19.170 EAL: request: mp_malloc_sync 00:08:19.170 EAL: No shared files mode enabled, IPC is disabled 00:08:19.170 EAL: Heap on socket 0 was expanded by 130MB 00:08:19.428 EAL: Calling mem event callback 'spdk:(nil)' 00:08:19.428 EAL: request: mp_malloc_sync 00:08:19.428 EAL: No shared files mode enabled, IPC is disabled 00:08:19.428 EAL: Heap on socket 0 was shrunk by 130MB 00:08:19.686 EAL: Trying to obtain current memory policy. 00:08:19.686 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:19.686 EAL: Restoring previous memory policy: 0 00:08:19.686 EAL: Calling mem event callback 'spdk:(nil)' 00:08:19.686 EAL: request: mp_malloc_sync 00:08:19.686 EAL: No shared files mode enabled, IPC is disabled 00:08:19.686 EAL: Heap on socket 0 was expanded by 258MB 00:08:20.251 EAL: Calling mem event callback 'spdk:(nil)' 00:08:20.251 EAL: request: mp_malloc_sync 00:08:20.251 EAL: No shared files mode enabled, IPC is disabled 00:08:20.251 EAL: Heap on socket 0 was shrunk by 258MB 00:08:20.508 EAL: Trying to obtain current memory policy. 00:08:20.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:20.766 EAL: Restoring previous memory policy: 0 00:08:20.766 EAL: Calling mem event callback 'spdk:(nil)' 00:08:20.766 EAL: request: mp_malloc_sync 00:08:20.766 EAL: No shared files mode enabled, IPC is disabled 00:08:20.766 EAL: Heap on socket 0 was expanded by 514MB 00:08:21.698 EAL: Calling mem event callback 'spdk:(nil)' 00:08:21.698 EAL: request: mp_malloc_sync 00:08:21.698 EAL: No shared files mode enabled, IPC is disabled 00:08:21.698 EAL: Heap on socket 0 was shrunk by 514MB 00:08:22.263 EAL: Trying to obtain current memory policy. 
00:08:22.263 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:22.520 EAL: Restoring previous memory policy: 0 00:08:22.520 EAL: Calling mem event callback 'spdk:(nil)' 00:08:22.520 EAL: request: mp_malloc_sync 00:08:22.520 EAL: No shared files mode enabled, IPC is disabled 00:08:22.520 EAL: Heap on socket 0 was expanded by 1026MB 00:08:24.417 EAL: Calling mem event callback 'spdk:(nil)' 00:08:24.417 EAL: request: mp_malloc_sync 00:08:24.417 EAL: No shared files mode enabled, IPC is disabled 00:08:24.417 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:25.790 passed 00:08:25.790 00:08:25.790 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.790 suites 1 1 n/a 0 0 00:08:25.790 tests 2 2 2 0 0 00:08:25.790 asserts 6545 6545 6545 0 n/a 00:08:25.790 00:08:25.790 Elapsed time = 7.324 seconds 00:08:25.790 EAL: Calling mem event callback 'spdk:(nil)' 00:08:25.790 EAL: request: mp_malloc_sync 00:08:25.790 EAL: No shared files mode enabled, IPC is disabled 00:08:25.790 EAL: Heap on socket 0 was shrunk by 2MB 00:08:25.790 EAL: No shared files mode enabled, IPC is disabled 00:08:25.790 EAL: No shared files mode enabled, IPC is disabled 00:08:25.790 EAL: No shared files mode enabled, IPC is disabled 00:08:25.790 00:08:25.790 real 0m7.634s 00:08:25.790 user 0m6.524s 00:08:25.790 sys 0m0.973s 00:08:25.790 16:24:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.790 ************************************ 00:08:25.790 END TEST env_vtophys 00:08:25.790 16:24:02 -- common/autotest_common.sh@10 -- # set +x 00:08:25.790 ************************************ 00:08:25.790 16:24:02 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:25.790 16:24:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:25.790 16:24:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.790 16:24:02 -- common/autotest_common.sh@10 -- # set +x 00:08:25.790 ************************************ 00:08:25.790 START TEST env_pci 00:08:25.790 ************************************ 00:08:25.790 16:24:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:25.790 00:08:25.790 00:08:25.790 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.790 http://cunit.sourceforge.net/ 00:08:25.790 00:08:25.790 00:08:25.790 Suite: pci 00:08:25.790 Test: pci_hook ...[2024-07-11 16:24:02.590499] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 104890 has claimed it 00:08:26.048 EAL: Cannot find device (10000:00:01.0) 00:08:26.048 EAL: Failed to attach device on primary process 00:08:26.048 passed 00:08:26.048 00:08:26.048 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.048 suites 1 1 n/a 0 0 00:08:26.048 tests 1 1 1 0 0 00:08:26.048 asserts 25 25 25 0 n/a 00:08:26.048 00:08:26.048 Elapsed time = 0.006 seconds 00:08:26.048 00:08:26.048 real 0m0.084s 00:08:26.048 user 0m0.050s 00:08:26.048 sys 0m0.035s 00:08:26.048 16:24:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.048 ************************************ 00:08:26.048 END TEST env_pci 00:08:26.048 ************************************ 00:08:26.048 16:24:02 -- common/autotest_common.sh@10 -- # set +x 00:08:26.048 16:24:02 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:26.048 16:24:02 -- env/env.sh@15 -- # uname 00:08:26.048 16:24:02 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:26.048 16:24:02 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:08:26.048 16:24:02 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:26.048 16:24:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:26.048 16:24:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.048 16:24:02 -- common/autotest_common.sh@10 -- # set +x 00:08:26.048 ************************************ 00:08:26.048 START TEST env_dpdk_post_init 00:08:26.048 ************************************ 00:08:26.048 16:24:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:26.049 EAL: Detected CPU lcores: 10 00:08:26.049 EAL: Detected NUMA nodes: 1 00:08:26.049 EAL: Detected static linkage of DPDK 00:08:26.049 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:26.049 EAL: Selected IOVA mode 'PA' 00:08:26.049 EAL: VFIO support initialized 00:08:26.307 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:26.307 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:08:26.307 Starting DPDK initialization... 00:08:26.307 Starting SPDK post initialization... 00:08:26.307 SPDK NVMe probe 00:08:26.307 Attaching to 0000:00:06.0 00:08:26.307 Attached to 0000:00:06.0 00:08:26.307 Cleaning up... 00:08:26.307 00:08:26.307 real 0m0.271s 00:08:26.307 user 0m0.100s 00:08:26.307 sys 0m0.072s 00:08:26.307 16:24:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.307 16:24:02 -- common/autotest_common.sh@10 -- # set +x 00:08:26.307 ************************************ 00:08:26.307 END TEST env_dpdk_post_init 00:08:26.307 ************************************ 00:08:26.307 16:24:03 -- env/env.sh@26 -- # uname 00:08:26.307 16:24:03 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:26.307 16:24:03 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:26.307 16:24:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:26.307 16:24:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.307 16:24:03 -- common/autotest_common.sh@10 -- # set +x 00:08:26.307 ************************************ 00:08:26.307 START TEST env_mem_callbacks 00:08:26.307 ************************************ 00:08:26.307 16:24:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:26.307 EAL: Detected CPU lcores: 10 00:08:26.307 EAL: Detected NUMA nodes: 1 00:08:26.307 EAL: Detected static linkage of DPDK 00:08:26.307 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:26.307 EAL: Selected IOVA mode 'PA' 00:08:26.307 EAL: VFIO support initialized 00:08:26.564 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:26.564 00:08:26.564 00:08:26.564 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.564 http://cunit.sourceforge.net/ 00:08:26.564 00:08:26.564 00:08:26.564 Suite: memory 00:08:26.564 Test: test ... 
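The register/unregister lines that follow are this mem_callbacks test echoing each change in SPDK's memory map through its registered callback. Applications trigger exactly those events with spdk_mem_register()/spdk_mem_unregister() from spdk/env.h when they hand SPDK an externally allocated buffer; a minimal sketch (the function name is made up, and note that regions must be 2 MB aligned):

#include <stdio.h>
#include "spdk/env.h"

/* Sketch: make an externally allocated, 2 MB-aligned buffer visible to
 * SPDK's memory map, use it, then drop it from the map again. Each call
 * surfaces as one register/unregister event like those printed below. */
static int track_buffer(void *vaddr, size_t len)
{
	int rc = spdk_mem_register(vaddr, len);
	if (rc != 0) {
		fprintf(stderr, "spdk_mem_register failed: %d\n", rc);
		return rc;
	}

	/* ... DMA to and from the buffer happens here ... */

	return spdk_mem_unregister(vaddr, len);
}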
00:08:26.564 register 0x200000200000 2097152 00:08:26.564 malloc 3145728 00:08:26.564 register 0x200000400000 4194304 00:08:26.564 buf 0x2000004fffc0 len 3145728 PASSED 00:08:26.564 malloc 64 00:08:26.564 buf 0x2000004ffec0 len 64 PASSED 00:08:26.564 malloc 4194304 00:08:26.564 register 0x200000800000 6291456 00:08:26.564 buf 0x2000009fffc0 len 4194304 PASSED 00:08:26.564 free 0x2000004fffc0 3145728 00:08:26.564 free 0x2000004ffec0 64 00:08:26.564 unregister 0x200000400000 4194304 PASSED 00:08:26.564 free 0x2000009fffc0 4194304 00:08:26.564 unregister 0x200000800000 6291456 PASSED 00:08:26.564 malloc 8388608 00:08:26.564 register 0x200000400000 10485760 00:08:26.564 buf 0x2000005fffc0 len 8388608 PASSED 00:08:26.564 free 0x2000005fffc0 8388608 00:08:26.564 unregister 0x200000400000 10485760 PASSED 00:08:26.564 passed 00:08:26.564 00:08:26.564 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.564 suites 1 1 n/a 0 0 00:08:26.564 tests 1 1 1 0 0 00:08:26.564 asserts 15 15 15 0 n/a 00:08:26.564 00:08:26.564 Elapsed time = 0.054 seconds 00:08:26.564 00:08:26.564 real 0m0.282s 00:08:26.564 user 0m0.113s 00:08:26.564 sys 0m0.067s 00:08:26.564 16:24:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.564 16:24:03 -- common/autotest_common.sh@10 -- # set +x 00:08:26.564 ************************************ 00:08:26.564 END TEST env_mem_callbacks 00:08:26.564 ************************************ 00:08:26.564 00:08:26.564 real 0m8.920s 00:08:26.564 user 0m7.270s 00:08:26.564 sys 0m1.292s 00:08:26.564 16:24:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.564 16:24:03 -- common/autotest_common.sh@10 -- # set +x 00:08:26.564 ************************************ 00:08:26.564 END TEST env 00:08:26.564 ************************************ 00:08:26.822 16:24:03 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:26.822 16:24:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:26.822 16:24:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.822 16:24:03 -- common/autotest_common.sh@10 -- # set +x 00:08:26.822 ************************************ 00:08:26.822 START TEST rpc 00:08:26.822 ************************************ 00:08:26.822 16:24:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:26.822 * Looking for test storage... 00:08:26.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:26.822 16:24:03 -- rpc/rpc.sh@65 -- # spdk_pid=105027 00:08:26.822 16:24:03 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:26.822 16:24:03 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:26.822 16:24:03 -- rpc/rpc.sh@67 -- # waitforlisten 105027 00:08:26.822 16:24:03 -- common/autotest_common.sh@819 -- # '[' -z 105027 ']' 00:08:26.822 16:24:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.822 16:24:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:26.822 16:24:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
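Every one of these unit binaries prints the same CUnit banner and Run Summary table because they share one harness shape; a stripped-down skeleton of it (suite and test names are placeholders, not the ones above):

#include <CUnit/Basic.h>

static void example_test(void)
{
	CU_ASSERT(1 + 1 == 2);
}

int main(void)
{
	CU_pSuite suite;
	unsigned int failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	suite = CU_add_suite("example_suite", NULL, NULL);
	if (suite == NULL || CU_add_test(suite, "example_test", example_test) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	/* CU_BRM_VERBOSE produces the per-test "Test: ... passed" lines and
	 * the "Run Summary: Type Total Ran Passed Failed" table seen above. */
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();
	failures = CU_get_number_of_failures();
	CU_cleanup_registry();

	return failures == 0 ? 0 : 1;
}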
00:08:26.822 16:24:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:26.822 16:24:03 -- common/autotest_common.sh@10 -- # set +x 00:08:26.822 [2024-07-11 16:24:03.534450] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:26.822 [2024-07-11 16:24:03.534641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105027 ] 00:08:27.079 [2024-07-11 16:24:03.688804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.079 [2024-07-11 16:24:03.848243] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:27.079 [2024-07-11 16:24:03.848496] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:27.079 [2024-07-11 16:24:03.848532] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 105027' to capture a snapshot of events at runtime. 00:08:27.079 [2024-07-11 16:24:03.848552] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid105027 for offline analysis/debug. 00:08:27.079 [2024-07-11 16:24:03.848638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.517 16:24:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:28.517 16:24:05 -- common/autotest_common.sh@852 -- # return 0 00:08:28.517 16:24:05 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:28.517 16:24:05 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:28.517 16:24:05 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:28.517 16:24:05 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:28.517 16:24:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:28.517 16:24:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.517 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:28.517 ************************************ 00:08:28.517 START TEST rpc_integrity 00:08:28.517 ************************************ 00:08:28.517 16:24:05 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:28.517 16:24:05 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:28.517 16:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.518 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:28.518 16:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.518 16:24:05 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:28.518 16:24:05 -- rpc/rpc.sh@13 -- # jq length 00:08:28.518 16:24:05 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:28.518 16:24:05 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:28.518 16:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.518 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:28.518 16:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.518 16:24:05 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:28.518 16:24:05 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:28.518 16:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.518 16:24:05 -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.518 16:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.518 16:24:05 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:28.518 { 00:08:28.518 "name": "Malloc0", 00:08:28.518 "aliases": [ 00:08:28.518 "53db3e8f-e1fc-4317-9133-22d283eb129b" 00:08:28.518 ], 00:08:28.518 "product_name": "Malloc disk", 00:08:28.518 "block_size": 512, 00:08:28.518 "num_blocks": 16384, 00:08:28.518 "uuid": "53db3e8f-e1fc-4317-9133-22d283eb129b", 00:08:28.518 "assigned_rate_limits": { 00:08:28.518 "rw_ios_per_sec": 0, 00:08:28.518 "rw_mbytes_per_sec": 0, 00:08:28.518 "r_mbytes_per_sec": 0, 00:08:28.518 "w_mbytes_per_sec": 0 00:08:28.518 }, 00:08:28.518 "claimed": false, 00:08:28.518 "zoned": false, 00:08:28.518 "supported_io_types": { 00:08:28.518 "read": true, 00:08:28.518 "write": true, 00:08:28.518 "unmap": true, 00:08:28.518 "write_zeroes": true, 00:08:28.518 "flush": true, 00:08:28.518 "reset": true, 00:08:28.518 "compare": false, 00:08:28.518 "compare_and_write": false, 00:08:28.518 "abort": true, 00:08:28.518 "nvme_admin": false, 00:08:28.518 "nvme_io": false 00:08:28.518 }, 00:08:28.518 "memory_domains": [ 00:08:28.518 { 00:08:28.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.518 "dma_device_type": 2 00:08:28.518 } 00:08:28.518 ], 00:08:28.518 "driver_specific": {} 00:08:28.518 } 00:08:28.518 ]' 00:08:28.518 16:24:05 -- rpc/rpc.sh@17 -- # jq length 00:08:28.777 16:24:05 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:28.777 16:24:05 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:28.777 16:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.777 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:28.777 [2024-07-11 16:24:05.385234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:28.777 [2024-07-11 16:24:05.385357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.777 [2024-07-11 16:24:05.385420] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:28.777 [2024-07-11 16:24:05.385444] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.777 [2024-07-11 16:24:05.387784] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.777 [2024-07-11 16:24:05.387895] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:28.777 Passthru0 00:08:28.777 16:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.777 16:24:05 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:28.777 16:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.777 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:28.777 16:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.777 16:24:05 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:28.777 { 00:08:28.777 "name": "Malloc0", 00:08:28.777 "aliases": [ 00:08:28.777 "53db3e8f-e1fc-4317-9133-22d283eb129b" 00:08:28.777 ], 00:08:28.777 "product_name": "Malloc disk", 00:08:28.777 "block_size": 512, 00:08:28.777 "num_blocks": 16384, 00:08:28.777 "uuid": "53db3e8f-e1fc-4317-9133-22d283eb129b", 00:08:28.777 "assigned_rate_limits": { 00:08:28.777 "rw_ios_per_sec": 0, 00:08:28.777 "rw_mbytes_per_sec": 0, 00:08:28.777 "r_mbytes_per_sec": 0, 00:08:28.777 "w_mbytes_per_sec": 0 00:08:28.777 }, 00:08:28.777 "claimed": true, 00:08:28.777 "claim_type": "exclusive_write", 00:08:28.777 "zoned": false, 00:08:28.777 "supported_io_types": { 00:08:28.777 "read": true, 
00:08:28.777 "write": true, 00:08:28.777 "unmap": true, 00:08:28.777 "write_zeroes": true, 00:08:28.777 "flush": true, 00:08:28.777 "reset": true, 00:08:28.777 "compare": false, 00:08:28.777 "compare_and_write": false, 00:08:28.777 "abort": true, 00:08:28.777 "nvme_admin": false, 00:08:28.777 "nvme_io": false 00:08:28.777 }, 00:08:28.777 "memory_domains": [ 00:08:28.777 { 00:08:28.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.777 "dma_device_type": 2 00:08:28.777 } 00:08:28.777 ], 00:08:28.777 "driver_specific": {} 00:08:28.777 }, 00:08:28.777 { 00:08:28.777 "name": "Passthru0", 00:08:28.777 "aliases": [ 00:08:28.777 "73422126-6daa-5701-8573-2dc752044c42" 00:08:28.777 ], 00:08:28.777 "product_name": "passthru", 00:08:28.777 "block_size": 512, 00:08:28.777 "num_blocks": 16384, 00:08:28.777 "uuid": "73422126-6daa-5701-8573-2dc752044c42", 00:08:28.777 "assigned_rate_limits": { 00:08:28.777 "rw_ios_per_sec": 0, 00:08:28.777 "rw_mbytes_per_sec": 0, 00:08:28.777 "r_mbytes_per_sec": 0, 00:08:28.777 "w_mbytes_per_sec": 0 00:08:28.777 }, 00:08:28.777 "claimed": false, 00:08:28.777 "zoned": false, 00:08:28.777 "supported_io_types": { 00:08:28.777 "read": true, 00:08:28.777 "write": true, 00:08:28.777 "unmap": true, 00:08:28.777 "write_zeroes": true, 00:08:28.777 "flush": true, 00:08:28.777 "reset": true, 00:08:28.777 "compare": false, 00:08:28.777 "compare_and_write": false, 00:08:28.777 "abort": true, 00:08:28.777 "nvme_admin": false, 00:08:28.777 "nvme_io": false 00:08:28.777 }, 00:08:28.777 "memory_domains": [ 00:08:28.777 { 00:08:28.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.777 "dma_device_type": 2 00:08:28.777 } 00:08:28.777 ], 00:08:28.777 "driver_specific": { 00:08:28.777 "passthru": { 00:08:28.777 "name": "Passthru0", 00:08:28.777 "base_bdev_name": "Malloc0" 00:08:28.777 } 00:08:28.777 } 00:08:28.777 } 00:08:28.777 ]' 00:08:28.777 16:24:05 -- rpc/rpc.sh@21 -- # jq length 00:08:28.777 16:24:05 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:28.777 16:24:05 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:28.777 16:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.777 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:28.777 16:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.777 16:24:05 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:28.777 16:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.778 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:28.778 16:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.778 16:24:05 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:28.778 16:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.778 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:28.778 16:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.778 16:24:05 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:28.778 16:24:05 -- rpc/rpc.sh@26 -- # jq length 00:08:28.778 16:24:05 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:28.778 00:08:28.778 real 0m0.354s 00:08:28.778 user 0m0.245s 00:08:28.778 sys 0m0.022s 00:08:28.778 16:24:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.778 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:28.778 ************************************ 00:08:28.778 END TEST rpc_integrity 00:08:28.778 ************************************ 00:08:29.036 16:24:05 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:29.036 16:24:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:08:29.036 16:24:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.036 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:29.036 ************************************ 00:08:29.036 START TEST rpc_plugins 00:08:29.036 ************************************ 00:08:29.036 16:24:05 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:08:29.036 16:24:05 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:29.036 16:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.036 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:29.036 16:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.036 16:24:05 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:29.036 16:24:05 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:29.036 16:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.036 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:29.036 16:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.036 16:24:05 -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:29.036 { 00:08:29.036 "name": "Malloc1", 00:08:29.036 "aliases": [ 00:08:29.036 "3381e748-de07-4bea-bd1e-9857c096cd67" 00:08:29.036 ], 00:08:29.036 "product_name": "Malloc disk", 00:08:29.036 "block_size": 4096, 00:08:29.036 "num_blocks": 256, 00:08:29.036 "uuid": "3381e748-de07-4bea-bd1e-9857c096cd67", 00:08:29.036 "assigned_rate_limits": { 00:08:29.036 "rw_ios_per_sec": 0, 00:08:29.036 "rw_mbytes_per_sec": 0, 00:08:29.036 "r_mbytes_per_sec": 0, 00:08:29.036 "w_mbytes_per_sec": 0 00:08:29.036 }, 00:08:29.036 "claimed": false, 00:08:29.036 "zoned": false, 00:08:29.036 "supported_io_types": { 00:08:29.036 "read": true, 00:08:29.036 "write": true, 00:08:29.036 "unmap": true, 00:08:29.036 "write_zeroes": true, 00:08:29.036 "flush": true, 00:08:29.036 "reset": true, 00:08:29.036 "compare": false, 00:08:29.036 "compare_and_write": false, 00:08:29.036 "abort": true, 00:08:29.036 "nvme_admin": false, 00:08:29.036 "nvme_io": false 00:08:29.036 }, 00:08:29.036 "memory_domains": [ 00:08:29.036 { 00:08:29.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.036 "dma_device_type": 2 00:08:29.036 } 00:08:29.036 ], 00:08:29.036 "driver_specific": {} 00:08:29.036 } 00:08:29.036 ]' 00:08:29.036 16:24:05 -- rpc/rpc.sh@32 -- # jq length 00:08:29.036 16:24:05 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:29.036 16:24:05 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:29.036 16:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.036 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:29.036 16:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.036 16:24:05 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:29.036 16:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.036 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:29.036 16:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.036 16:24:05 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:29.036 16:24:05 -- rpc/rpc.sh@36 -- # jq length 00:08:29.036 16:24:05 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:29.036 00:08:29.037 real 0m0.154s 00:08:29.037 user 0m0.107s 00:08:29.037 sys 0m0.012s 00:08:29.037 16:24:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.037 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:29.037 ************************************ 00:08:29.037 END TEST rpc_plugins 00:08:29.037 ************************************ 00:08:29.037 16:24:05 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:08:29.037 16:24:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:29.037 16:24:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.037 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:29.037 ************************************ 00:08:29.037 START TEST rpc_trace_cmd_test 00:08:29.037 ************************************ 00:08:29.037 16:24:05 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:08:29.037 16:24:05 -- rpc/rpc.sh@40 -- # local info 00:08:29.037 16:24:05 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:29.037 16:24:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.037 16:24:05 -- common/autotest_common.sh@10 -- # set +x 00:08:29.295 16:24:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.295 16:24:05 -- rpc/rpc.sh@42 -- # info='{ 00:08:29.295 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid105027", 00:08:29.295 "tpoint_group_mask": "0x8", 00:08:29.295 "iscsi_conn": { 00:08:29.295 "mask": "0x2", 00:08:29.295 "tpoint_mask": "0x0" 00:08:29.295 }, 00:08:29.295 "scsi": { 00:08:29.295 "mask": "0x4", 00:08:29.295 "tpoint_mask": "0x0" 00:08:29.295 }, 00:08:29.295 "bdev": { 00:08:29.295 "mask": "0x8", 00:08:29.295 "tpoint_mask": "0xffffffffffffffff" 00:08:29.295 }, 00:08:29.295 "nvmf_rdma": { 00:08:29.295 "mask": "0x10", 00:08:29.295 "tpoint_mask": "0x0" 00:08:29.295 }, 00:08:29.295 "nvmf_tcp": { 00:08:29.295 "mask": "0x20", 00:08:29.295 "tpoint_mask": "0x0" 00:08:29.295 }, 00:08:29.295 "ftl": { 00:08:29.295 "mask": "0x40", 00:08:29.295 "tpoint_mask": "0x0" 00:08:29.295 }, 00:08:29.295 "blobfs": { 00:08:29.295 "mask": "0x80", 00:08:29.295 "tpoint_mask": "0x0" 00:08:29.295 }, 00:08:29.295 "dsa": { 00:08:29.295 "mask": "0x200", 00:08:29.295 "tpoint_mask": "0x0" 00:08:29.295 }, 00:08:29.295 "thread": { 00:08:29.295 "mask": "0x400", 00:08:29.295 "tpoint_mask": "0x0" 00:08:29.295 }, 00:08:29.295 "nvme_pcie": { 00:08:29.295 "mask": "0x800", 00:08:29.295 "tpoint_mask": "0x0" 00:08:29.295 }, 00:08:29.295 "iaa": { 00:08:29.295 "mask": "0x1000", 00:08:29.295 "tpoint_mask": "0x0" 00:08:29.295 }, 00:08:29.295 "nvme_tcp": { 00:08:29.295 "mask": "0x2000", 00:08:29.295 "tpoint_mask": "0x0" 00:08:29.295 }, 00:08:29.295 "bdev_nvme": { 00:08:29.295 "mask": "0x4000", 00:08:29.295 "tpoint_mask": "0x0" 00:08:29.295 } 00:08:29.295 }' 00:08:29.295 16:24:05 -- rpc/rpc.sh@43 -- # jq length 00:08:29.295 16:24:05 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:08:29.295 16:24:05 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:29.295 16:24:05 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:29.295 16:24:05 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:29.295 16:24:06 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:29.295 16:24:06 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:29.295 16:24:06 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:29.295 16:24:06 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:29.553 ************************************ 00:08:29.553 END TEST rpc_trace_cmd_test 00:08:29.553 ************************************ 00:08:29.553 16:24:06 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:29.553 00:08:29.553 real 0m0.322s 00:08:29.553 user 0m0.291s 00:08:29.553 sys 0m0.025s 00:08:29.553 16:24:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.553 16:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.553 16:24:06 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:29.553 16:24:06 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:29.553 16:24:06 -- rpc/rpc.sh@81 -- # 
run_test rpc_daemon_integrity rpc_integrity 00:08:29.553 16:24:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:29.553 16:24:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.553 16:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.553 ************************************ 00:08:29.553 START TEST rpc_daemon_integrity 00:08:29.553 ************************************ 00:08:29.553 16:24:06 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:08:29.553 16:24:06 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:29.553 16:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.553 16:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.553 16:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.553 16:24:06 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:29.553 16:24:06 -- rpc/rpc.sh@13 -- # jq length 00:08:29.553 16:24:06 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:29.553 16:24:06 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:29.553 16:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.553 16:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.553 16:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.553 16:24:06 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:29.553 16:24:06 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:29.553 16:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.553 16:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.553 16:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.553 16:24:06 -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:29.553 { 00:08:29.553 "name": "Malloc2", 00:08:29.553 "aliases": [ 00:08:29.554 "0b426244-0dd1-4305-a0a3-6b5ead831267" 00:08:29.554 ], 00:08:29.554 "product_name": "Malloc disk", 00:08:29.554 "block_size": 512, 00:08:29.554 "num_blocks": 16384, 00:08:29.554 "uuid": "0b426244-0dd1-4305-a0a3-6b5ead831267", 00:08:29.554 "assigned_rate_limits": { 00:08:29.554 "rw_ios_per_sec": 0, 00:08:29.554 "rw_mbytes_per_sec": 0, 00:08:29.554 "r_mbytes_per_sec": 0, 00:08:29.554 "w_mbytes_per_sec": 0 00:08:29.554 }, 00:08:29.554 "claimed": false, 00:08:29.554 "zoned": false, 00:08:29.554 "supported_io_types": { 00:08:29.554 "read": true, 00:08:29.554 "write": true, 00:08:29.554 "unmap": true, 00:08:29.554 "write_zeroes": true, 00:08:29.554 "flush": true, 00:08:29.554 "reset": true, 00:08:29.554 "compare": false, 00:08:29.554 "compare_and_write": false, 00:08:29.554 "abort": true, 00:08:29.554 "nvme_admin": false, 00:08:29.554 "nvme_io": false 00:08:29.554 }, 00:08:29.554 "memory_domains": [ 00:08:29.554 { 00:08:29.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.554 "dma_device_type": 2 00:08:29.554 } 00:08:29.554 ], 00:08:29.554 "driver_specific": {} 00:08:29.554 } 00:08:29.554 ]' 00:08:29.554 16:24:06 -- rpc/rpc.sh@17 -- # jq length 00:08:29.812 16:24:06 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:29.812 16:24:06 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:29.812 16:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.812 16:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.812 [2024-07-11 16:24:06.370282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:29.812 [2024-07-11 16:24:06.370390] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.812 [2024-07-11 16:24:06.370455] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:29.812 
[2024-07-11 16:24:06.370478] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.812 [2024-07-11 16:24:06.372907] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.812 [2024-07-11 16:24:06.373025] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:29.812 Passthru0 00:08:29.812 16:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.812 16:24:06 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:29.812 16:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.812 16:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.812 16:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.812 16:24:06 -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:29.812 { 00:08:29.812 "name": "Malloc2", 00:08:29.812 "aliases": [ 00:08:29.812 "0b426244-0dd1-4305-a0a3-6b5ead831267" 00:08:29.812 ], 00:08:29.812 "product_name": "Malloc disk", 00:08:29.812 "block_size": 512, 00:08:29.812 "num_blocks": 16384, 00:08:29.812 "uuid": "0b426244-0dd1-4305-a0a3-6b5ead831267", 00:08:29.812 "assigned_rate_limits": { 00:08:29.812 "rw_ios_per_sec": 0, 00:08:29.812 "rw_mbytes_per_sec": 0, 00:08:29.812 "r_mbytes_per_sec": 0, 00:08:29.812 "w_mbytes_per_sec": 0 00:08:29.812 }, 00:08:29.812 "claimed": true, 00:08:29.812 "claim_type": "exclusive_write", 00:08:29.812 "zoned": false, 00:08:29.812 "supported_io_types": { 00:08:29.812 "read": true, 00:08:29.812 "write": true, 00:08:29.812 "unmap": true, 00:08:29.812 "write_zeroes": true, 00:08:29.812 "flush": true, 00:08:29.812 "reset": true, 00:08:29.812 "compare": false, 00:08:29.812 "compare_and_write": false, 00:08:29.812 "abort": true, 00:08:29.812 "nvme_admin": false, 00:08:29.812 "nvme_io": false 00:08:29.812 }, 00:08:29.812 "memory_domains": [ 00:08:29.812 { 00:08:29.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.812 "dma_device_type": 2 00:08:29.812 } 00:08:29.812 ], 00:08:29.812 "driver_specific": {} 00:08:29.812 }, 00:08:29.812 { 00:08:29.812 "name": "Passthru0", 00:08:29.812 "aliases": [ 00:08:29.812 "00015245-d54e-5f78-84ef-f83d2cbe57bd" 00:08:29.812 ], 00:08:29.812 "product_name": "passthru", 00:08:29.812 "block_size": 512, 00:08:29.812 "num_blocks": 16384, 00:08:29.812 "uuid": "00015245-d54e-5f78-84ef-f83d2cbe57bd", 00:08:29.813 "assigned_rate_limits": { 00:08:29.813 "rw_ios_per_sec": 0, 00:08:29.813 "rw_mbytes_per_sec": 0, 00:08:29.813 "r_mbytes_per_sec": 0, 00:08:29.813 "w_mbytes_per_sec": 0 00:08:29.813 }, 00:08:29.813 "claimed": false, 00:08:29.813 "zoned": false, 00:08:29.813 "supported_io_types": { 00:08:29.813 "read": true, 00:08:29.813 "write": true, 00:08:29.813 "unmap": true, 00:08:29.813 "write_zeroes": true, 00:08:29.813 "flush": true, 00:08:29.813 "reset": true, 00:08:29.813 "compare": false, 00:08:29.813 "compare_and_write": false, 00:08:29.813 "abort": true, 00:08:29.813 "nvme_admin": false, 00:08:29.813 "nvme_io": false 00:08:29.813 }, 00:08:29.813 "memory_domains": [ 00:08:29.813 { 00:08:29.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.813 "dma_device_type": 2 00:08:29.813 } 00:08:29.813 ], 00:08:29.813 "driver_specific": { 00:08:29.813 "passthru": { 00:08:29.813 "name": "Passthru0", 00:08:29.813 "base_bdev_name": "Malloc2" 00:08:29.813 } 00:08:29.813 } 00:08:29.813 } 00:08:29.813 ]' 00:08:29.813 16:24:06 -- rpc/rpc.sh@21 -- # jq length 00:08:29.813 16:24:06 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:29.813 16:24:06 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:29.813 16:24:06 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.813 16:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.813 16:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.813 16:24:06 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:29.813 16:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.813 16:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.813 16:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.813 16:24:06 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:29.813 16:24:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:29.813 16:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.813 16:24:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:29.813 16:24:06 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:29.813 16:24:06 -- rpc/rpc.sh@26 -- # jq length 00:08:29.813 ************************************ 00:08:29.813 END TEST rpc_daemon_integrity 00:08:29.813 ************************************ 00:08:29.813 16:24:06 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:29.813 00:08:29.813 real 0m0.350s 00:08:29.813 user 0m0.243s 00:08:29.813 sys 0m0.021s 00:08:29.813 16:24:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.813 16:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:29.813 16:24:06 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:29.813 16:24:06 -- rpc/rpc.sh@84 -- # killprocess 105027 00:08:29.813 16:24:06 -- common/autotest_common.sh@926 -- # '[' -z 105027 ']' 00:08:29.813 16:24:06 -- common/autotest_common.sh@930 -- # kill -0 105027 00:08:29.813 16:24:06 -- common/autotest_common.sh@931 -- # uname 00:08:29.813 16:24:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:29.813 16:24:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105027 00:08:29.813 killing process with pid 105027 00:08:29.813 16:24:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:29.813 16:24:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:29.813 16:24:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105027' 00:08:29.813 16:24:06 -- common/autotest_common.sh@945 -- # kill 105027 00:08:29.813 16:24:06 -- common/autotest_common.sh@950 -- # wait 105027 00:08:32.341 00:08:32.341 real 0m5.152s 00:08:32.341 user 0m6.276s 00:08:32.341 sys 0m0.671s 00:08:32.341 16:24:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.341 ************************************ 00:08:32.341 END TEST rpc 00:08:32.341 ************************************ 00:08:32.341 16:24:08 -- common/autotest_common.sh@10 -- # set +x 00:08:32.341 16:24:08 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:32.341 16:24:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:32.341 16:24:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.341 16:24:08 -- common/autotest_common.sh@10 -- # set +x 00:08:32.341 ************************************ 00:08:32.341 START TEST rpc_client 00:08:32.341 ************************************ 00:08:32.341 16:24:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:32.341 * Looking for test storage... 
00:08:32.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:32.342 16:24:08 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:32.342 OK 00:08:32.342 16:24:08 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:32.342 00:08:32.342 real 0m0.140s 00:08:32.342 user 0m0.099s 00:08:32.342 sys 0m0.051s 00:08:32.342 16:24:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.342 16:24:08 -- common/autotest_common.sh@10 -- # set +x 00:08:32.342 ************************************ 00:08:32.342 END TEST rpc_client 00:08:32.342 ************************************ 00:08:32.342 16:24:08 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:32.342 16:24:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:32.342 16:24:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.342 16:24:08 -- common/autotest_common.sh@10 -- # set +x 00:08:32.342 ************************************ 00:08:32.342 START TEST json_config 00:08:32.342 ************************************ 00:08:32.342 16:24:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:32.342 16:24:08 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:32.342 16:24:08 -- nvmf/common.sh@7 -- # uname -s 00:08:32.342 16:24:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.342 16:24:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.342 16:24:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.342 16:24:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.342 16:24:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.342 16:24:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.342 16:24:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.342 16:24:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.342 16:24:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.342 16:24:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.342 16:24:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:32e5fb14-a0b2-430a-82ac-919ba6b76b2f 00:08:32.342 16:24:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=32e5fb14-a0b2-430a-82ac-919ba6b76b2f 00:08:32.342 16:24:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.342 16:24:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.342 16:24:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:32.342 16:24:08 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.342 16:24:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.342 16:24:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.342 16:24:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.342 16:24:08 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:32.342 16:24:08 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:32.342 16:24:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:32.342 16:24:08 -- paths/export.sh@5 -- # export PATH 00:08:32.342 16:24:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:32.342 16:24:08 -- nvmf/common.sh@46 -- # : 0 00:08:32.342 16:24:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:32.342 16:24:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:32.342 16:24:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:32.342 16:24:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.342 16:24:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.342 16:24:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:32.342 16:24:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:32.342 16:24:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:32.342 16:24:08 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:08:32.342 16:24:08 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:08:32.342 16:24:08 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:08:32.342 16:24:08 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:32.342 16:24:08 -- json_config/json_config.sh@30 -- # app_pid=([target]="" [initiator]="") 00:08:32.342 16:24:08 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:08:32.342 16:24:08 -- json_config/json_config.sh@31 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:08:32.342 16:24:08 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:08:32.342 16:24:08 -- json_config/json_config.sh@32 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:08:32.342 16:24:08 -- json_config/json_config.sh@32 -- # declare -A app_params 00:08:32.342 16:24:08 -- json_config/json_config.sh@33 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:08:32.342 16:24:08 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:08:32.342 16:24:08 -- json_config/json_config.sh@43 -- # last_event_id=0 00:08:32.342 INFO: JSON configuration test init 00:08:32.342 16:24:08 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:32.342 16:24:08 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test 
init' 00:08:32.342 16:24:08 -- json_config/json_config.sh@420 -- # json_config_test_init 00:08:32.342 16:24:08 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:08:32.342 16:24:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:32.342 16:24:08 -- common/autotest_common.sh@10 -- # set +x 00:08:32.342 16:24:08 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:08:32.342 16:24:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:32.342 16:24:08 -- common/autotest_common.sh@10 -- # set +x 00:08:32.342 16:24:08 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:08:32.342 16:24:08 -- json_config/json_config.sh@98 -- # local app=target 00:08:32.342 16:24:08 -- json_config/json_config.sh@99 -- # shift 00:08:32.342 16:24:08 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:32.342 16:24:08 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:32.342 16:24:08 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:32.342 16:24:08 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:32.342 16:24:08 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:32.342 16:24:08 -- json_config/json_config.sh@111 -- # app_pid[$app]=105323 00:08:32.342 16:24:08 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:32.342 Waiting for target to run... 00:08:32.342 16:24:08 -- json_config/json_config.sh@114 -- # waitforlisten 105323 /var/tmp/spdk_tgt.sock 00:08:32.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:32.342 16:24:08 -- common/autotest_common.sh@819 -- # '[' -z 105323 ']' 00:08:32.342 16:24:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:32.342 16:24:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:32.342 16:24:08 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:32.342 16:24:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:32.342 16:24:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:32.342 16:24:08 -- common/autotest_common.sh@10 -- # set +x 00:08:32.342 [2024-07-11 16:24:08.924462] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
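The "Starting SPDK ... initialization" line above is printed as spdk_app_start() brings up EAL with the parameter string echoed next. A condensed sketch of that startup path, using the public spdk/event.h API (the app name and socket path are invented; spdk_tgt itself wires in more options such as --wait-for-rpc):

#include "spdk/event.h"
#include "spdk/log.h"

static void
app_started(void *arg)
{
	/* Runs once the framework is up; from here on RPCs are served. */
	(void)arg;
	SPDK_NOTICELOG("framework started\n");
}

int main(int argc, char **argv)
{
	struct spdk_app_opts opts;
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "demo_tgt";
	opts.rpc_addr = "/var/tmp/spdk_demo.sock";	/* counterpart of -r */

	rc = spdk_app_parse_args(argc, argv, &opts, NULL, NULL, NULL, NULL);
	if (rc != SPDK_APP_PARSE_ARGS_SUCCESS) {
		return rc;
	}

	rc = spdk_app_start(&opts, app_started, NULL);
	spdk_app_fini();
	return rc;
}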
00:08:32.342 [2024-07-11 16:24:08.924692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105323 ] 00:08:32.908 [2024-07-11 16:24:09.410328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.908 [2024-07-11 16:24:09.613091] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:32.908 [2024-07-11 16:24:09.613325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.167 16:24:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:33.167 16:24:09 -- common/autotest_common.sh@852 -- # return 0 00:08:33.167 00:08:33.167 16:24:09 -- json_config/json_config.sh@115 -- # echo '' 00:08:33.167 16:24:09 -- json_config/json_config.sh@322 -- # create_accel_config 00:08:33.167 16:24:09 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:08:33.167 16:24:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:33.167 16:24:09 -- common/autotest_common.sh@10 -- # set +x 00:08:33.167 16:24:09 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:08:33.167 16:24:09 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:08:33.167 16:24:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:33.167 16:24:09 -- common/autotest_common.sh@10 -- # set +x 00:08:33.167 16:24:09 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:33.167 16:24:09 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:08:33.167 16:24:09 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:34.102 16:24:10 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:08:34.102 16:24:10 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:08:34.102 16:24:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:34.102 16:24:10 -- common/autotest_common.sh@10 -- # set +x 00:08:34.102 16:24:10 -- json_config/json_config.sh@48 -- # local ret=0 00:08:34.102 16:24:10 -- json_config/json_config.sh@49 -- # enabled_types=("bdev_register" "bdev_unregister") 00:08:34.102 16:24:10 -- json_config/json_config.sh@49 -- # local enabled_types 00:08:34.102 16:24:10 -- json_config/json_config.sh@51 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:08:34.102 16:24:10 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:34.102 16:24:10 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:34.102 16:24:10 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:34.360 16:24:11 -- json_config/json_config.sh@51 -- # local get_types 00:08:34.360 16:24:11 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:08:34.360 16:24:11 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:08:34.360 16:24:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:34.360 16:24:11 -- common/autotest_common.sh@10 -- # set +x 00:08:34.619 16:24:11 -- json_config/json_config.sh@58 -- # return 0 00:08:34.619 16:24:11 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:08:34.619 16:24:11 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:08:34.619 16:24:11 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:08:34.619 16:24:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:34.619 16:24:11 -- common/autotest_common.sh@10 -- # set +x 00:08:34.619 16:24:11 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:08:34.619 16:24:11 -- json_config/json_config.sh@160 -- # local expected_notifications 00:08:34.619 16:24:11 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:08:34.619 16:24:11 -- json_config/json_config.sh@164 -- # get_notifications 00:08:34.619 16:24:11 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:34.619 16:24:11 -- json_config/json_config.sh@64 -- # IFS=: 00:08:34.619 16:24:11 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:34.619 16:24:11 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:34.619 16:24:11 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:34.619 16:24:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:34.878 16:24:11 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:34.878 16:24:11 -- json_config/json_config.sh@64 -- # IFS=: 00:08:34.878 16:24:11 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:34.878 16:24:11 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:08:34.878 16:24:11 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:08:34.878 16:24:11 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:08:34.878 16:24:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:08:34.878 Nvme0n1p0 Nvme0n1p1 00:08:35.137 16:24:11 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:08:35.137 16:24:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:08:35.137 [2024-07-11 16:24:11.940187] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:35.137 [2024-07-11 16:24:11.940338] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:35.137 00:08:35.395 16:24:11 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:08:35.395 16:24:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:08:35.395 Malloc3 00:08:35.395 16:24:12 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:35.395 16:24:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:08:35.654 [2024-07-11 16:24:12.409114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:35.654 [2024-07-11 16:24:12.409191] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.654 [2024-07-11 16:24:12.409239] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:35.654 [2024-07-11 16:24:12.409281] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:35.655 [2024-07-11 16:24:12.411369] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.655 [2024-07-11 16:24:12.411423] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:35.655 PTBdevFromMalloc3 00:08:35.655 16:24:12 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:08:35.655 16:24:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:08:35.913 Null0 00:08:35.913 16:24:12 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:08:35.913 16:24:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:08:36.172 Malloc0 00:08:36.172 16:24:12 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:08:36.172 16:24:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:08:36.429 Malloc1 00:08:36.429 16:24:13 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:08:36.429 16:24:13 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:08:36.686 102400+0 records in 00:08:36.686 102400+0 records out 00:08:36.686 104857600 bytes (105 MB, 100 MiB) copied, 0.282754 s, 371 MB/s 00:08:36.686 16:24:13 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:08:36.686 16:24:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:08:36.943 aio_disk 00:08:36.943 16:24:13 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:08:36.943 16:24:13 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:36.943 16:24:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:08:37.201 c097409e-c7ef-4ce9-bc04-cee2c7add03c 00:08:37.201 16:24:13 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:08:37.201 16:24:13 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:08:37.201 16:24:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:08:37.460 16:24:14 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:08:37.460 16:24:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:08:37.718 16:24:14 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:37.718 16:24:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:08:37.977 16:24:14 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:37.977 16:24:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:08:38.238 16:24:14 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:08:38.238 16:24:14 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:08:38.238 16:24:14 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:6aec6d4d-8899-41ce-b71c-6815c5c9400f bdev_register:153c6e9e-f85b-44cd-bced-94286396f078 bdev_register:2d0a1e1c-7e48-4f76-9ced-7a0f0c33267d bdev_register:f3a98081-0f5b-4fcc-904f-801dfe53ae14 00:08:38.238 16:24:14 -- json_config/json_config.sh@70 -- # local events_to_check 00:08:38.238 16:24:14 -- json_config/json_config.sh@71 -- # local recorded_events 00:08:38.238 16:24:14 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:08:38.238 16:24:14 -- json_config/json_config.sh@74 -- # sort 00:08:38.238 16:24:14 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:6aec6d4d-8899-41ce-b71c-6815c5c9400f bdev_register:153c6e9e-f85b-44cd-bced-94286396f078 bdev_register:2d0a1e1c-7e48-4f76-9ced-7a0f0c33267d bdev_register:f3a98081-0f5b-4fcc-904f-801dfe53ae14 00:08:38.238 16:24:14 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:08:38.238 16:24:14 -- json_config/json_config.sh@75 -- # get_notifications 00:08:38.238 16:24:14 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:08:38.238 16:24:14 -- json_config/json_config.sh@75 -- # sort 00:08:38.238 16:24:14 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.238 16:24:14 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.238 16:24:14 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:08:38.238 16:24:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:08:38.238 16:24:14 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:6aec6d4d-8899-41ce-b71c-6815c5c9400f 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:153c6e9e-f85b-44cd-bced-94286396f078 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:2d0a1e1c-7e48-4f76-9ced-7a0f0c33267d 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@65 -- # echo bdev_register:f3a98081-0f5b-4fcc-904f-801dfe53ae14 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # IFS=: 00:08:38.497 16:24:15 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:08:38.497 16:24:15 -- json_config/json_config.sh@77 
-- # [[ bdev_register:153c6e9e-f85b-44cd-bced-94286396f078 bdev_register:2d0a1e1c-7e48-4f76-9ced-7a0f0c33267d bdev_register:6aec6d4d-8899-41ce-b71c-6815c5c9400f bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:f3a98081-0f5b-4fcc-904f-801dfe53ae14 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\5\3\c\6\e\9\e\-\f\8\5\b\-\4\4\c\d\-\b\c\e\d\-\9\4\2\8\6\3\9\6\f\0\7\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\d\0\a\1\e\1\c\-\7\e\4\8\-\4\f\7\6\-\9\c\e\d\-\7\a\0\f\0\c\3\3\2\6\7\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\a\e\c\6\d\4\d\-\8\8\9\9\-\4\1\c\e\-\b\7\1\c\-\6\8\1\5\c\5\c\9\4\0\0\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\3\a\9\8\0\8\1\-\0\f\5\b\-\4\f\c\c\-\9\0\4\f\-\8\0\1\d\f\e\5\3\a\e\1\4 ]] 00:08:38.497 16:24:15 -- json_config/json_config.sh@89 -- # cat 00:08:38.497 16:24:15 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:153c6e9e-f85b-44cd-bced-94286396f078 bdev_register:2d0a1e1c-7e48-4f76-9ced-7a0f0c33267d bdev_register:6aec6d4d-8899-41ce-b71c-6815c5c9400f bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:f3a98081-0f5b-4fcc-904f-801dfe53ae14 00:08:38.497 Expected events matched: 00:08:38.497 bdev_register:153c6e9e-f85b-44cd-bced-94286396f078 00:08:38.497 bdev_register:2d0a1e1c-7e48-4f76-9ced-7a0f0c33267d 00:08:38.497 bdev_register:6aec6d4d-8899-41ce-b71c-6815c5c9400f 00:08:38.497 bdev_register:Malloc0 00:08:38.497 bdev_register:Malloc0p0 00:08:38.497 bdev_register:Malloc0p1 00:08:38.498 bdev_register:Malloc0p2 00:08:38.498 bdev_register:Malloc1 00:08:38.498 bdev_register:Malloc3 00:08:38.498 bdev_register:Null0 00:08:38.498 bdev_register:Nvme0n1 00:08:38.498 bdev_register:Nvme0n1p0 00:08:38.498 bdev_register:Nvme0n1p1 00:08:38.498 bdev_register:PTBdevFromMalloc3 00:08:38.498 bdev_register:aio_disk 00:08:38.498 bdev_register:f3a98081-0f5b-4fcc-904f-801dfe53ae14 00:08:38.498 16:24:15 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:08:38.498 16:24:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:38.498 16:24:15 -- common/autotest_common.sh@10 -- # set +x 00:08:38.498 16:24:15 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:08:38.498 16:24:15 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:08:38.498 16:24:15 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:08:38.498 16:24:15 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:08:38.498 16:24:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:38.498 16:24:15 -- common/autotest_common.sh@10 -- # set +x 00:08:38.756 
16:24:15 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:08:38.756 16:24:15 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:38.756 16:24:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:39.015 MallocBdevForConfigChangeCheck 00:08:39.015 16:24:15 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:08:39.015 16:24:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:39.015 16:24:15 -- common/autotest_common.sh@10 -- # set +x 00:08:39.015 16:24:15 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:08:39.015 16:24:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:39.274 INFO: shutting down applications... 00:08:39.274 16:24:16 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:08:39.274 16:24:16 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:08:39.274 16:24:16 -- json_config/json_config.sh@431 -- # json_config_clear target 00:08:39.274 16:24:16 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:08:39.274 16:24:16 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:39.533 [2024-07-11 16:24:16.218998] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:08:39.791 Calling clear_vhost_scsi_subsystem 00:08:39.791 Calling clear_iscsi_subsystem 00:08:39.791 Calling clear_vhost_blk_subsystem 00:08:39.791 Calling clear_nbd_subsystem 00:08:39.791 Calling clear_nvmf_subsystem 00:08:39.791 Calling clear_bdev_subsystem 00:08:39.791 Calling clear_accel_subsystem 00:08:39.791 Calling clear_iobuf_subsystem 00:08:39.791 Calling clear_sock_subsystem 00:08:39.791 Calling clear_vmd_subsystem 00:08:39.791 Calling clear_scheduler_subsystem 00:08:39.791 16:24:16 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:08:39.791 16:24:16 -- json_config/json_config.sh@396 -- # count=100 00:08:39.791 16:24:16 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:08:39.791 16:24:16 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:39.791 16:24:16 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:39.791 16:24:16 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:08:40.050 16:24:16 -- json_config/json_config.sh@398 -- # break 00:08:40.050 16:24:16 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:08:40.050 16:24:16 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:08:40.050 16:24:16 -- json_config/json_config.sh@120 -- # local app=target 00:08:40.050 16:24:16 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:08:40.050 16:24:16 -- json_config/json_config.sh@124 -- # [[ -n 105323 ]] 00:08:40.050 16:24:16 -- json_config/json_config.sh@127 -- # kill -SIGINT 105323 00:08:40.050 16:24:16 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:08:40.050 16:24:16 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:40.050 16:24:16 -- 
json_config/json_config.sh@130 -- # kill -0 105323 00:08:40.050 16:24:16 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:40.618 16:24:17 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:40.618 16:24:17 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:40.618 16:24:17 -- json_config/json_config.sh@130 -- # kill -0 105323 00:08:40.618 16:24:17 -- json_config/json_config.sh@134 -- # sleep 0.5 00:08:41.185 16:24:17 -- json_config/json_config.sh@129 -- # (( i++ )) 00:08:41.185 16:24:17 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:08:41.185 16:24:17 -- json_config/json_config.sh@130 -- # kill -0 105323 00:08:41.185 16:24:17 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:08:41.185 16:24:17 -- json_config/json_config.sh@132 -- # break 00:08:41.185 16:24:17 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:08:41.185 SPDK target shutdown done 00:08:41.185 16:24:17 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:08:41.185 INFO: relaunching applications... 00:08:41.185 16:24:17 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:08:41.186 16:24:17 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:41.186 16:24:17 -- json_config/json_config.sh@98 -- # local app=target 00:08:41.186 16:24:17 -- json_config/json_config.sh@99 -- # shift 00:08:41.186 16:24:17 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:08:41.186 16:24:17 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:08:41.186 16:24:17 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:08:41.186 16:24:17 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:41.186 16:24:17 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:08:41.186 16:24:17 -- json_config/json_config.sh@111 -- # app_pid[$app]=105607 00:08:41.186 16:24:17 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:08:41.186 Waiting for target to run... 00:08:41.186 16:24:17 -- json_config/json_config.sh@114 -- # waitforlisten 105607 /var/tmp/spdk_tgt.sock 00:08:41.186 16:24:17 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:41.186 16:24:17 -- common/autotest_common.sh@819 -- # '[' -z 105607 ']' 00:08:41.186 16:24:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:41.186 16:24:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:41.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:41.186 16:24:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:41.186 16:24:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:41.186 16:24:17 -- common/autotest_common.sh@10 -- # set +x 00:08:41.186 [2024-07-11 16:24:17.874135] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
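Two mechanics from the json_config test are worth isolating. First, the notification check earlier in the trace flattens every target event into a "type:ctx:id" string before the sorted expected and actual lists are compared; the pipeline, using the socket path shown in the log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        notify_get_notifications -i 0 \
        | jq -r '.[] | "\(.type):\(.ctx):\(.id)"' | sort

Second, the shutdown and relaunch just traced is a save/SIGINT/restart round-trip. A minimal sketch, where $pid is a hypothetical variable standing in for the recorded target pid:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
    kill -SIGINT "$pid"    # polled with kill -0 / sleep 0.5, as in the trace above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json &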
00:08:41.186 [2024-07-11 16:24:17.874926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105607 ] 00:08:41.752 [2024-07-11 16:24:18.327541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.752 [2024-07-11 16:24:18.507749] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:41.752 [2024-07-11 16:24:18.508002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.684 [2024-07-11 16:24:19.153345] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:42.684 [2024-07-11 16:24:19.153505] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:42.684 [2024-07-11 16:24:19.161336] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:42.684 [2024-07-11 16:24:19.161432] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:42.684 [2024-07-11 16:24:19.169363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:42.684 [2024-07-11 16:24:19.169440] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:42.684 [2024-07-11 16:24:19.169484] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:42.684 [2024-07-11 16:24:19.261609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:42.684 [2024-07-11 16:24:19.261690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.684 [2024-07-11 16:24:19.261731] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:42.684 [2024-07-11 16:24:19.261758] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.684 [2024-07-11 16:24:19.262281] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.684 [2024-07-11 16:24:19.262335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:42.942 16:24:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:42.942 00:08:42.942 16:24:19 -- common/autotest_common.sh@852 -- # return 0 00:08:42.942 16:24:19 -- json_config/json_config.sh@115 -- # echo '' 00:08:42.942 16:24:19 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:08:42.942 INFO: Checking if target configuration is the same... 00:08:42.942 16:24:19 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:42.942 16:24:19 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:42.942 16:24:19 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:08:42.942 16:24:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:42.942 + '[' 2 -ne 2 ']' 00:08:42.942 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:42.942 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:08:42.942 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:42.942 +++ basename /dev/fd/62 00:08:42.942 ++ mktemp /tmp/62.XXX 00:08:42.942 + tmp_file_1=/tmp/62.cIn 00:08:42.942 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:42.942 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:42.942 + tmp_file_2=/tmp/spdk_tgt_config.json.i8W 00:08:42.942 + ret=0 00:08:42.942 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:43.199 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:43.199 + diff -u /tmp/62.cIn /tmp/spdk_tgt_config.json.i8W 00:08:43.199 + echo 'INFO: JSON config files are the same' 00:08:43.199 INFO: JSON config files are the same 00:08:43.199 + rm /tmp/62.cIn /tmp/spdk_tgt_config.json.i8W 00:08:43.199 + exit 0 00:08:43.199 16:24:19 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:08:43.199 16:24:19 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:43.199 INFO: changing configuration and checking if this can be detected... 00:08:43.199 16:24:19 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:43.199 16:24:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:43.457 16:24:20 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:43.457 16:24:20 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:08:43.457 16:24:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:43.457 + '[' 2 -ne 2 ']' 00:08:43.457 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:43.457 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:08:43.457 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:43.457 +++ basename /dev/fd/62 00:08:43.457 ++ mktemp /tmp/62.XXX 00:08:43.457 + tmp_file_1=/tmp/62.b9Q 00:08:43.457 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:43.457 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:43.457 + tmp_file_2=/tmp/spdk_tgt_config.json.kpL 00:08:43.457 + ret=0 00:08:43.457 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:44.022 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:44.022 + diff -u /tmp/62.b9Q /tmp/spdk_tgt_config.json.kpL 00:08:44.022 + ret=1 00:08:44.022 + echo '=== Start of file: /tmp/62.b9Q ===' 00:08:44.022 + cat /tmp/62.b9Q 00:08:44.022 + echo '=== End of file: /tmp/62.b9Q ===' 00:08:44.022 + echo '' 00:08:44.022 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kpL ===' 00:08:44.022 + cat /tmp/spdk_tgt_config.json.kpL 00:08:44.022 + echo '=== End of file: /tmp/spdk_tgt_config.json.kpL ===' 00:08:44.022 + echo '' 00:08:44.022 + rm /tmp/62.b9Q /tmp/spdk_tgt_config.json.kpL 00:08:44.022 + exit 1 00:08:44.022 INFO: configuration change detected. 00:08:44.022 16:24:20 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
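The "same / changed" verification above is a semantic rather than textual comparison: both the live config (save_config) and the reference file are passed through config_filter.py -method sort before diffing, so only a real configuration difference produces ret=1. A condensed sketch of the two steps, with hypothetical temp-file names standing in for the mktemp results:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > /tmp/live.json
    /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
        < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/ref.json
    diff -u /tmp/live.json /tmp/ref.json && echo 'INFO: JSON config files are the same'
    # introducing a detectable change is then just deleting the sentinel bdev:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck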
00:08:44.022 16:24:20 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:08:44.022 16:24:20 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:08:44.022 16:24:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:44.022 16:24:20 -- common/autotest_common.sh@10 -- # set +x 00:08:44.022 16:24:20 -- json_config/json_config.sh@360 -- # local ret=0 00:08:44.022 16:24:20 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:08:44.022 16:24:20 -- json_config/json_config.sh@370 -- # [[ -n 105607 ]] 00:08:44.022 16:24:20 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:08:44.022 16:24:20 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:08:44.022 16:24:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:44.022 16:24:20 -- common/autotest_common.sh@10 -- # set +x 00:08:44.022 16:24:20 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:08:44.022 16:24:20 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:08:44.022 16:24:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:08:44.280 16:24:20 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:08:44.280 16:24:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:08:44.280 16:24:21 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:08:44.280 16:24:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:08:44.537 16:24:21 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:08:44.537 16:24:21 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:08:44.796 16:24:21 -- json_config/json_config.sh@246 -- # uname -s 00:08:44.796 16:24:21 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:08:44.796 16:24:21 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:08:44.796 16:24:21 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:08:44.796 16:24:21 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:08:44.796 16:24:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:44.796 16:24:21 -- common/autotest_common.sh@10 -- # set +x 00:08:44.796 16:24:21 -- json_config/json_config.sh@376 -- # killprocess 105607 00:08:44.796 16:24:21 -- common/autotest_common.sh@926 -- # '[' -z 105607 ']' 00:08:44.796 16:24:21 -- common/autotest_common.sh@930 -- # kill -0 105607 00:08:44.796 16:24:21 -- common/autotest_common.sh@931 -- # uname 00:08:44.796 16:24:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:44.796 16:24:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105607 00:08:44.796 16:24:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:44.796 16:24:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:44.796 killing process with pid 105607 00:08:44.796 16:24:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105607' 00:08:44.796 16:24:21 -- common/autotest_common.sh@945 -- # kill 105607 00:08:44.796 16:24:21 -- common/autotest_common.sh@950 -- # wait 105607 00:08:45.731 16:24:22 -- 
json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:45.731 16:24:22 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:08:45.731 16:24:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:45.731 16:24:22 -- common/autotest_common.sh@10 -- # set +x 00:08:45.731 16:24:22 -- json_config/json_config.sh@381 -- # return 0 00:08:45.731 INFO: Success 00:08:45.731 16:24:22 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:08:45.731 00:08:45.731 real 0m13.646s 00:08:45.731 user 0m19.967s 00:08:45.731 sys 0m2.314s 00:08:45.731 16:24:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:45.731 16:24:22 -- common/autotest_common.sh@10 -- # set +x 00:08:45.731 ************************************ 00:08:45.731 END TEST json_config 00:08:45.731 ************************************ 00:08:45.731 16:24:22 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:45.731 16:24:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:45.731 16:24:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.731 16:24:22 -- common/autotest_common.sh@10 -- # set +x 00:08:45.731 ************************************ 00:08:45.731 START TEST json_config_extra_key 00:08:45.731 ************************************ 00:08:45.731 16:24:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:45.731 16:24:22 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:45.731 16:24:22 -- nvmf/common.sh@7 -- # uname -s 00:08:45.731 16:24:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.731 16:24:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.731 16:24:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.731 16:24:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.731 16:24:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.731 16:24:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.731 16:24:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.731 16:24:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.731 16:24:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.731 16:24:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.731 16:24:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f64b2edb-0a0c-475f-a3fa-c15be3076fe4 00:08:45.731 16:24:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=f64b2edb-0a0c-475f-a3fa-c15be3076fe4 00:08:45.731 16:24:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.731 16:24:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.731 16:24:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:45.731 16:24:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.731 16:24:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.731 16:24:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.731 16:24:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.731 16:24:22 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:45.731 16:24:22 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:45.731 16:24:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:45.731 16:24:22 -- paths/export.sh@5 -- # export PATH 00:08:45.731 16:24:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:08:45.731 16:24:22 -- nvmf/common.sh@46 -- # : 0 00:08:45.731 16:24:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:45.731 16:24:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:45.731 16:24:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:45.731 16:24:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.731 16:24:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.731 16:24:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:45.731 16:24:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:45.731 16:24:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@16 -- # app_pid=([target]="") 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@17 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@18 -- # app_params=([target]='-m 0x1 -s 1024') 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@19 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:45.990 INFO: launching applications... 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 
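Before launching anything, json_config_extra_key.sh sets up its per-app bookkeeping as bash associative arrays keyed by app name, as the trace above shows. Restated compactly, with values taken verbatim from the log:

    declare -A app_pid=([target]="")
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")
    trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR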
00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@25 -- # shift 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=105784 00:08:45.990 Waiting for target to run... 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 105784 /var/tmp/spdk_tgt.sock 00:08:45.990 16:24:22 -- common/autotest_common.sh@819 -- # '[' -z 105784 ']' 00:08:45.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:45.990 16:24:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:45.990 16:24:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:45.990 16:24:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:45.990 16:24:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:45.990 16:24:22 -- common/autotest_common.sh@10 -- # set +x 00:08:45.990 16:24:22 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:45.990 [2024-07-11 16:24:22.607944] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:45.990 [2024-07-11 16:24:22.608401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105784 ] 00:08:46.555 [2024-07-11 16:24:23.057854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.555 [2024-07-11 16:24:23.226089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:46.555 [2024-07-11 16:24:23.226301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.930 16:24:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:47.930 16:24:24 -- common/autotest_common.sh@852 -- # return 0 00:08:47.930 16:24:24 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:08:47.930 00:08:47.930 INFO: shutting down applications... 00:08:47.930 16:24:24 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
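The shutdown that follows is a bounded poll: SIGINT is sent once, then the pid is probed with kill -0 in 0.5 s steps for at most 30 iterations before the test gives up. A sketch of that loop as reconstructed from the trace below, with $app set to "target":

    kill -SIGINT "${app_pid[$app]}"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "${app_pid[$app]}" 2>/dev/null || break
        sleep 0.5
    done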
00:08:47.930 16:24:24 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:08:47.930 16:24:24 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:08:47.930 16:24:24 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:08:47.930 16:24:24 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 105784 ]] 00:08:47.930 16:24:24 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 105784 00:08:47.930 16:24:24 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:08:47.930 16:24:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:47.930 16:24:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105784 00:08:47.930 16:24:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:48.189 16:24:24 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:48.189 16:24:24 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:48.189 16:24:24 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105784 00:08:48.189 16:24:24 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:48.756 16:24:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:48.756 16:24:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:48.756 16:24:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105784 00:08:48.756 16:24:25 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:49.016 16:24:25 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:49.016 16:24:25 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:49.016 16:24:25 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105784 00:08:49.016 16:24:25 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:49.583 16:24:26 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:49.583 16:24:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:49.583 16:24:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105784 00:08:49.583 16:24:26 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:08:50.151 16:24:26 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:08:50.151 16:24:26 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:08:50.151 16:24:26 -- json_config/json_config_extra_key.sh@50 -- # kill -0 105784 00:08:50.151 16:24:26 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:08:50.151 16:24:26 -- json_config/json_config_extra_key.sh@52 -- # break 00:08:50.151 SPDK target shutdown done 00:08:50.151 16:24:26 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:08:50.151 16:24:26 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:08:50.151 Success 00:08:50.151 16:24:26 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:08:50.151 00:08:50.151 real 0m4.345s 00:08:50.151 user 0m4.322s 00:08:50.151 sys 0m0.558s 00:08:50.151 16:24:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.151 16:24:26 -- common/autotest_common.sh@10 -- # set +x 00:08:50.151 ************************************ 00:08:50.151 END TEST json_config_extra_key 00:08:50.151 ************************************ 00:08:50.151 16:24:26 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:50.151 16:24:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:50.151 16:24:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.151 16:24:26 -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.151 ************************************ 00:08:50.151 START TEST alias_rpc 00:08:50.151 ************************************ 00:08:50.151 16:24:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:50.151 * Looking for test storage... 00:08:50.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:50.151 16:24:26 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:50.151 16:24:26 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=105919 00:08:50.151 16:24:26 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 105919 00:08:50.151 16:24:26 -- common/autotest_common.sh@819 -- # '[' -z 105919 ']' 00:08:50.151 16:24:26 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:50.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.151 16:24:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.151 16:24:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:50.151 16:24:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.151 16:24:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:50.151 16:24:26 -- common/autotest_common.sh@10 -- # set +x 00:08:50.410 [2024-07-11 16:24:27.032280] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:50.411 [2024-07-11 16:24:27.032472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105919 ] 00:08:50.411 [2024-07-11 16:24:27.199840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.669 [2024-07-11 16:24:27.402517] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:50.669 [2024-07-11 16:24:27.402813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.045 16:24:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:52.045 16:24:28 -- common/autotest_common.sh@852 -- # return 0 00:08:52.045 16:24:28 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:52.304 16:24:28 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 105919 00:08:52.304 16:24:28 -- common/autotest_common.sh@926 -- # '[' -z 105919 ']' 00:08:52.304 16:24:28 -- common/autotest_common.sh@930 -- # kill -0 105919 00:08:52.304 16:24:28 -- common/autotest_common.sh@931 -- # uname 00:08:52.304 16:24:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:52.304 16:24:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 105919 00:08:52.304 16:24:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:52.304 killing process with pid 105919 00:08:52.304 16:24:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:52.304 16:24:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 105919' 00:08:52.304 16:24:28 -- common/autotest_common.sh@945 -- # kill 105919 00:08:52.304 16:24:28 -- common/autotest_common.sh@950 -- # wait 105919 00:08:54.209 ************************************ 00:08:54.209 END TEST alias_rpc 00:08:54.209 ************************************ 00:08:54.209 00:08:54.209 real 
0m4.056s 00:08:54.209 user 0m4.418s 00:08:54.209 sys 0m0.504s 00:08:54.209 16:24:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.209 16:24:30 -- common/autotest_common.sh@10 -- # set +x 00:08:54.209 16:24:30 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:08:54.209 16:24:30 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:54.209 16:24:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:54.209 16:24:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:54.209 16:24:30 -- common/autotest_common.sh@10 -- # set +x 00:08:54.209 ************************************ 00:08:54.209 START TEST spdkcli_tcp 00:08:54.209 ************************************ 00:08:54.209 16:24:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:54.467 * Looking for test storage... 00:08:54.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:54.467 16:24:31 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:54.467 16:24:31 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:54.467 16:24:31 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:54.467 16:24:31 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:54.467 16:24:31 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:54.467 16:24:31 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:54.467 16:24:31 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:54.467 16:24:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:54.467 16:24:31 -- common/autotest_common.sh@10 -- # set +x 00:08:54.467 16:24:31 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=106024 00:08:54.467 16:24:31 -- spdkcli/tcp.sh@27 -- # waitforlisten 106024 00:08:54.467 16:24:31 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:54.467 16:24:31 -- common/autotest_common.sh@819 -- # '[' -z 106024 ']' 00:08:54.467 16:24:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.467 16:24:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:54.467 16:24:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.467 16:24:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:54.467 16:24:31 -- common/autotest_common.sh@10 -- # set +x 00:08:54.467 [2024-07-11 16:24:31.132739] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
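The spdkcli_tcp test's distinguishing step is re-exporting the RPC socket over TCP: a socat bridge listens on 127.0.0.1:9998 and forwards to the UNIX-domain socket, and rpc.py is then pointed at the TCP endpoint with retry and timeout flags, as the trace below shows. The two commands in isolation:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods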
00:08:54.467 [2024-07-11 16:24:31.132921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106024 ] 00:08:54.725 [2024-07-11 16:24:31.307283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:54.983 [2024-07-11 16:24:31.537218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:54.983 [2024-07-11 16:24:31.537592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.983 [2024-07-11 16:24:31.537590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.358 16:24:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:56.358 16:24:32 -- common/autotest_common.sh@852 -- # return 0 00:08:56.358 16:24:32 -- spdkcli/tcp.sh@31 -- # socat_pid=106060 00:08:56.358 16:24:32 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:56.358 16:24:32 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:56.358 [ 00:08:56.358 "spdk_get_version", 00:08:56.358 "rpc_get_methods", 00:08:56.358 "trace_get_info", 00:08:56.358 "trace_get_tpoint_group_mask", 00:08:56.358 "trace_disable_tpoint_group", 00:08:56.358 "trace_enable_tpoint_group", 00:08:56.359 "trace_clear_tpoint_mask", 00:08:56.359 "trace_set_tpoint_mask", 00:08:56.359 "framework_get_pci_devices", 00:08:56.359 "framework_get_config", 00:08:56.359 "framework_get_subsystems", 00:08:56.359 "iobuf_get_stats", 00:08:56.359 "iobuf_set_options", 00:08:56.359 "sock_set_default_impl", 00:08:56.359 "sock_impl_set_options", 00:08:56.359 "sock_impl_get_options", 00:08:56.359 "vmd_rescan", 00:08:56.359 "vmd_remove_device", 00:08:56.359 "vmd_enable", 00:08:56.359 "accel_get_stats", 00:08:56.359 "accel_set_options", 00:08:56.359 "accel_set_driver", 00:08:56.359 "accel_crypto_key_destroy", 00:08:56.359 "accel_crypto_keys_get", 00:08:56.359 "accel_crypto_key_create", 00:08:56.359 "accel_assign_opc", 00:08:56.359 "accel_get_module_info", 00:08:56.359 "accel_get_opc_assignments", 00:08:56.359 "notify_get_notifications", 00:08:56.359 "notify_get_types", 00:08:56.359 "bdev_get_histogram", 00:08:56.359 "bdev_enable_histogram", 00:08:56.359 "bdev_set_qos_limit", 00:08:56.359 "bdev_set_qd_sampling_period", 00:08:56.359 "bdev_get_bdevs", 00:08:56.359 "bdev_reset_iostat", 00:08:56.359 "bdev_get_iostat", 00:08:56.359 "bdev_examine", 00:08:56.359 "bdev_wait_for_examine", 00:08:56.359 "bdev_set_options", 00:08:56.359 "scsi_get_devices", 00:08:56.359 "thread_set_cpumask", 00:08:56.359 "framework_get_scheduler", 00:08:56.359 "framework_set_scheduler", 00:08:56.359 "framework_get_reactors", 00:08:56.359 "thread_get_io_channels", 00:08:56.359 "thread_get_pollers", 00:08:56.359 "thread_get_stats", 00:08:56.359 "framework_monitor_context_switch", 00:08:56.359 "spdk_kill_instance", 00:08:56.359 "log_enable_timestamps", 00:08:56.359 "log_get_flags", 00:08:56.359 "log_clear_flag", 00:08:56.359 "log_set_flag", 00:08:56.359 "log_get_level", 00:08:56.359 "log_set_level", 00:08:56.359 "log_get_print_level", 00:08:56.359 "log_set_print_level", 00:08:56.359 "framework_enable_cpumask_locks", 00:08:56.359 "framework_disable_cpumask_locks", 00:08:56.359 "framework_wait_init", 00:08:56.359 "framework_start_init", 00:08:56.359 "virtio_blk_create_transport", 00:08:56.359 "virtio_blk_get_transports", 
00:08:56.359 "vhost_controller_set_coalescing", 00:08:56.359 "vhost_get_controllers", 00:08:56.359 "vhost_delete_controller", 00:08:56.359 "vhost_create_blk_controller", 00:08:56.359 "vhost_scsi_controller_remove_target", 00:08:56.359 "vhost_scsi_controller_add_target", 00:08:56.359 "vhost_start_scsi_controller", 00:08:56.359 "vhost_create_scsi_controller", 00:08:56.359 "nbd_get_disks", 00:08:56.359 "nbd_stop_disk", 00:08:56.359 "nbd_start_disk", 00:08:56.359 "env_dpdk_get_mem_stats", 00:08:56.359 "nvmf_subsystem_get_listeners", 00:08:56.359 "nvmf_subsystem_get_qpairs", 00:08:56.359 "nvmf_subsystem_get_controllers", 00:08:56.359 "nvmf_get_stats", 00:08:56.359 "nvmf_get_transports", 00:08:56.359 "nvmf_create_transport", 00:08:56.359 "nvmf_get_targets", 00:08:56.359 "nvmf_delete_target", 00:08:56.359 "nvmf_create_target", 00:08:56.359 "nvmf_subsystem_allow_any_host", 00:08:56.359 "nvmf_subsystem_remove_host", 00:08:56.359 "nvmf_subsystem_add_host", 00:08:56.359 "nvmf_subsystem_remove_ns", 00:08:56.359 "nvmf_subsystem_add_ns", 00:08:56.359 "nvmf_subsystem_listener_set_ana_state", 00:08:56.359 "nvmf_discovery_get_referrals", 00:08:56.359 "nvmf_discovery_remove_referral", 00:08:56.359 "nvmf_discovery_add_referral", 00:08:56.359 "nvmf_subsystem_remove_listener", 00:08:56.359 "nvmf_subsystem_add_listener", 00:08:56.359 "nvmf_delete_subsystem", 00:08:56.359 "nvmf_create_subsystem", 00:08:56.359 "nvmf_get_subsystems", 00:08:56.359 "nvmf_set_crdt", 00:08:56.359 "nvmf_set_config", 00:08:56.359 "nvmf_set_max_subsystems", 00:08:56.359 "iscsi_set_options", 00:08:56.359 "iscsi_get_auth_groups", 00:08:56.359 "iscsi_auth_group_remove_secret", 00:08:56.359 "iscsi_auth_group_add_secret", 00:08:56.359 "iscsi_delete_auth_group", 00:08:56.359 "iscsi_create_auth_group", 00:08:56.359 "iscsi_set_discovery_auth", 00:08:56.359 "iscsi_get_options", 00:08:56.359 "iscsi_target_node_request_logout", 00:08:56.359 "iscsi_target_node_set_redirect", 00:08:56.359 "iscsi_target_node_set_auth", 00:08:56.359 "iscsi_target_node_add_lun", 00:08:56.359 "iscsi_get_connections", 00:08:56.359 "iscsi_portal_group_set_auth", 00:08:56.359 "iscsi_start_portal_group", 00:08:56.359 "iscsi_delete_portal_group", 00:08:56.359 "iscsi_create_portal_group", 00:08:56.359 "iscsi_get_portal_groups", 00:08:56.359 "iscsi_delete_target_node", 00:08:56.359 "iscsi_target_node_remove_pg_ig_maps", 00:08:56.359 "iscsi_target_node_add_pg_ig_maps", 00:08:56.359 "iscsi_create_target_node", 00:08:56.359 "iscsi_get_target_nodes", 00:08:56.359 "iscsi_delete_initiator_group", 00:08:56.359 "iscsi_initiator_group_remove_initiators", 00:08:56.359 "iscsi_initiator_group_add_initiators", 00:08:56.359 "iscsi_create_initiator_group", 00:08:56.359 "iscsi_get_initiator_groups", 00:08:56.359 "iaa_scan_accel_module", 00:08:56.359 "dsa_scan_accel_module", 00:08:56.359 "ioat_scan_accel_module", 00:08:56.359 "accel_error_inject_error", 00:08:56.359 "bdev_iscsi_delete", 00:08:56.359 "bdev_iscsi_create", 00:08:56.359 "bdev_iscsi_set_options", 00:08:56.359 "bdev_virtio_attach_controller", 00:08:56.359 "bdev_virtio_scsi_get_devices", 00:08:56.359 "bdev_virtio_detach_controller", 00:08:56.359 "bdev_virtio_blk_set_hotplug", 00:08:56.359 "bdev_ftl_set_property", 00:08:56.359 "bdev_ftl_get_properties", 00:08:56.359 "bdev_ftl_get_stats", 00:08:56.359 "bdev_ftl_unmap", 00:08:56.359 "bdev_ftl_unload", 00:08:56.359 "bdev_ftl_delete", 00:08:56.359 "bdev_ftl_load", 00:08:56.359 "bdev_ftl_create", 00:08:56.359 "bdev_aio_delete", 00:08:56.359 "bdev_aio_rescan", 00:08:56.359 "bdev_aio_create", 
00:08:56.359 "blobfs_create", 00:08:56.359 "blobfs_detect", 00:08:56.359 "blobfs_set_cache_size", 00:08:56.359 "bdev_zone_block_delete", 00:08:56.359 "bdev_zone_block_create", 00:08:56.359 "bdev_delay_delete", 00:08:56.359 "bdev_delay_create", 00:08:56.359 "bdev_delay_update_latency", 00:08:56.359 "bdev_split_delete", 00:08:56.359 "bdev_split_create", 00:08:56.359 "bdev_error_inject_error", 00:08:56.359 "bdev_error_delete", 00:08:56.359 "bdev_error_create", 00:08:56.359 "bdev_raid_set_options", 00:08:56.359 "bdev_raid_remove_base_bdev", 00:08:56.359 "bdev_raid_add_base_bdev", 00:08:56.359 "bdev_raid_delete", 00:08:56.359 "bdev_raid_create", 00:08:56.359 "bdev_raid_get_bdevs", 00:08:56.359 "bdev_lvol_grow_lvstore", 00:08:56.359 "bdev_lvol_get_lvols", 00:08:56.359 "bdev_lvol_get_lvstores", 00:08:56.359 "bdev_lvol_delete", 00:08:56.359 "bdev_lvol_set_read_only", 00:08:56.359 "bdev_lvol_resize", 00:08:56.359 "bdev_lvol_decouple_parent", 00:08:56.359 "bdev_lvol_inflate", 00:08:56.359 "bdev_lvol_rename", 00:08:56.359 "bdev_lvol_clone_bdev", 00:08:56.359 "bdev_lvol_clone", 00:08:56.359 "bdev_lvol_snapshot", 00:08:56.359 "bdev_lvol_create", 00:08:56.359 "bdev_lvol_delete_lvstore", 00:08:56.359 "bdev_lvol_rename_lvstore", 00:08:56.359 "bdev_lvol_create_lvstore", 00:08:56.359 "bdev_passthru_delete", 00:08:56.359 "bdev_passthru_create", 00:08:56.359 "bdev_nvme_cuse_unregister", 00:08:56.359 "bdev_nvme_cuse_register", 00:08:56.359 "bdev_opal_new_user", 00:08:56.359 "bdev_opal_set_lock_state", 00:08:56.359 "bdev_opal_delete", 00:08:56.359 "bdev_opal_get_info", 00:08:56.359 "bdev_opal_create", 00:08:56.359 "bdev_nvme_opal_revert", 00:08:56.359 "bdev_nvme_opal_init", 00:08:56.359 "bdev_nvme_send_cmd", 00:08:56.359 "bdev_nvme_get_path_iostat", 00:08:56.359 "bdev_nvme_get_mdns_discovery_info", 00:08:56.359 "bdev_nvme_stop_mdns_discovery", 00:08:56.359 "bdev_nvme_start_mdns_discovery", 00:08:56.359 "bdev_nvme_set_multipath_policy", 00:08:56.359 "bdev_nvme_set_preferred_path", 00:08:56.359 "bdev_nvme_get_io_paths", 00:08:56.359 "bdev_nvme_remove_error_injection", 00:08:56.359 "bdev_nvme_add_error_injection", 00:08:56.359 "bdev_nvme_get_discovery_info", 00:08:56.359 "bdev_nvme_stop_discovery", 00:08:56.359 "bdev_nvme_start_discovery", 00:08:56.359 "bdev_nvme_get_controller_health_info", 00:08:56.359 "bdev_nvme_disable_controller", 00:08:56.359 "bdev_nvme_enable_controller", 00:08:56.359 "bdev_nvme_reset_controller", 00:08:56.359 "bdev_nvme_get_transport_statistics", 00:08:56.359 "bdev_nvme_apply_firmware", 00:08:56.359 "bdev_nvme_detach_controller", 00:08:56.359 "bdev_nvme_get_controllers", 00:08:56.359 "bdev_nvme_attach_controller", 00:08:56.359 "bdev_nvme_set_hotplug", 00:08:56.359 "bdev_nvme_set_options", 00:08:56.359 "bdev_null_resize", 00:08:56.359 "bdev_null_delete", 00:08:56.359 "bdev_null_create", 00:08:56.359 "bdev_malloc_delete", 00:08:56.359 "bdev_malloc_create" 00:08:56.359 ] 00:08:56.359 16:24:33 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:56.359 16:24:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:56.359 16:24:33 -- common/autotest_common.sh@10 -- # set +x 00:08:56.359 16:24:33 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:56.359 16:24:33 -- spdkcli/tcp.sh@38 -- # killprocess 106024 00:08:56.359 16:24:33 -- common/autotest_common.sh@926 -- # '[' -z 106024 ']' 00:08:56.359 16:24:33 -- common/autotest_common.sh@930 -- # kill -0 106024 00:08:56.359 16:24:33 -- common/autotest_common.sh@931 -- # uname 00:08:56.359 16:24:33 -- common/autotest_common.sh@931 
-- # '[' Linux = Linux ']' 00:08:56.359 16:24:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106024 00:08:56.359 16:24:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:56.359 16:24:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:56.359 16:24:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106024' 00:08:56.359 killing process with pid 106024 00:08:56.359 16:24:33 -- common/autotest_common.sh@945 -- # kill 106024 00:08:56.359 16:24:33 -- common/autotest_common.sh@950 -- # wait 106024 00:08:58.890 00:08:58.890 real 0m4.239s 00:08:58.890 user 0m7.835s 00:08:58.890 sys 0m0.565s 00:08:58.890 ************************************ 00:08:58.890 END TEST spdkcli_tcp 00:08:58.890 16:24:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.890 16:24:35 -- common/autotest_common.sh@10 -- # set +x 00:08:58.890 ************************************ 00:08:58.890 16:24:35 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:58.890 16:24:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:58.890 16:24:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:58.890 16:24:35 -- common/autotest_common.sh@10 -- # set +x 00:08:58.890 ************************************ 00:08:58.890 START TEST dpdk_mem_utility 00:08:58.890 ************************************ 00:08:58.890 16:24:35 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:58.890 * Looking for test storage... 00:08:58.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:58.890 16:24:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:58.890 16:24:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=106175 00:08:58.890 16:24:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:58.890 16:24:35 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 106175 00:08:58.890 16:24:35 -- common/autotest_common.sh@819 -- # '[' -z 106175 ']' 00:08:58.890 16:24:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.890 16:24:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:58.890 16:24:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.890 16:24:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:58.890 16:24:35 -- common/autotest_common.sh@10 -- # set +x 00:08:58.890 [2024-07-11 16:24:35.421662] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
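
The dpdk_mem_utility pass now starting drives three calls end to end: the env_dpdk_get_mem_stats RPC (which writes its report to /tmp/spdk_mem_dump.txt), a plain dpdk_mem_info.py run that summarizes heaps, mempools and memzones, and dpdk_mem_info.py -m 0 for the element-level dump of heap 0. A minimal standalone sketch of the same sequence, assuming an spdk_tgt already listening on the default RPC socket /var/tmp/spdk.sock:

    cd /home/vagrant/spdk_repo/spdk
    ./scripts/rpc.py env_dpdk_get_mem_stats    # writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                 # heap/mempool/memzone totals
    ./scripts/dpdk_mem_info.py -m 0            # per-element view of heap id 0
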
00:08:58.890 [2024-07-11 16:24:35.422120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106175 ] 00:08:58.890 [2024-07-11 16:24:35.589014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.149 [2024-07-11 16:24:35.799685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:59.149 [2024-07-11 16:24:35.800110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.528 16:24:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:00.528 16:24:37 -- common/autotest_common.sh@852 -- # return 0 00:09:00.528 16:24:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:00.528 16:24:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:00.528 16:24:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:00.528 16:24:37 -- common/autotest_common.sh@10 -- # set +x 00:09:00.528 { 00:09:00.528 "filename": "/tmp/spdk_mem_dump.txt" 00:09:00.528 } 00:09:00.528 16:24:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:00.528 16:24:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:00.528 DPDK memory size 820.000000 MiB in 1 heap(s) 00:09:00.528 1 heaps totaling size 820.000000 MiB 00:09:00.528 size: 820.000000 MiB heap id: 0 00:09:00.528 end heaps---------- 00:09:00.528 8 mempools totaling size 598.116089 MiB 00:09:00.528 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:00.528 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:00.528 size: 84.521057 MiB name: bdev_io_106175 00:09:00.528 size: 51.011292 MiB name: evtpool_106175 00:09:00.528 size: 50.003479 MiB name: msgpool_106175 00:09:00.528 size: 21.763794 MiB name: PDU_Pool 00:09:00.528 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:00.528 size: 0.026123 MiB name: Session_Pool 00:09:00.528 end mempools------- 00:09:00.528 6 memzones totaling size 4.142822 MiB 00:09:00.528 size: 1.000366 MiB name: RG_ring_0_106175 00:09:00.528 size: 1.000366 MiB name: RG_ring_1_106175 00:09:00.528 size: 1.000366 MiB name: RG_ring_4_106175 00:09:00.528 size: 1.000366 MiB name: RG_ring_5_106175 00:09:00.528 size: 0.125366 MiB name: RG_ring_2_106175 00:09:00.528 size: 0.015991 MiB name: RG_ring_3_106175 00:09:00.528 end memzones------- 00:09:00.528 16:24:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:00.528 heap id: 0 total size: 820.000000 MiB number of busy elements: 227 number of free elements: 18 00:09:00.528 list of free elements. 
size: 18.469482 MiB 00:09:00.528 element at address: 0x200000400000 with size: 1.999451 MiB 00:09:00.528 element at address: 0x200000800000 with size: 1.996887 MiB 00:09:00.528 element at address: 0x200007000000 with size: 1.995972 MiB 00:09:00.528 element at address: 0x20000b200000 with size: 1.995972 MiB 00:09:00.528 element at address: 0x200019100040 with size: 0.999939 MiB 00:09:00.528 element at address: 0x200019500040 with size: 0.999939 MiB 00:09:00.528 element at address: 0x200019600000 with size: 0.999329 MiB 00:09:00.528 element at address: 0x200003e00000 with size: 0.996094 MiB 00:09:00.528 element at address: 0x200032200000 with size: 0.994324 MiB 00:09:00.528 element at address: 0x200018e00000 with size: 0.959656 MiB 00:09:00.528 element at address: 0x200019900040 with size: 0.937256 MiB 00:09:00.528 element at address: 0x200000200000 with size: 0.835083 MiB 00:09:00.528 element at address: 0x20001b000000 with size: 0.560974 MiB 00:09:00.528 element at address: 0x200019200000 with size: 0.489197 MiB 00:09:00.528 element at address: 0x200019a00000 with size: 0.485413 MiB 00:09:00.528 element at address: 0x200013800000 with size: 0.468140 MiB 00:09:00.528 element at address: 0x200028400000 with size: 0.399719 MiB 00:09:00.528 element at address: 0x200003a00000 with size: 0.356140 MiB 00:09:00.528 list of standard malloc elements. size: 199.266113 MiB 00:09:00.528 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:09:00.528 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:09:00.528 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:09:00.528 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:00.528 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:09:00.528 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:00.528 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:09:00.528 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:00.528 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:09:00.528 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:09:00.528 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:09:00.528 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d7000 with size: 0.000244 MiB 
00:09:00.528 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:00.528 element at address: 0x200003aff980 with size: 0.000244 MiB 00:09:00.528 element at address: 0x200003affa80 with size: 0.000244 MiB 00:09:00.528 element at address: 0x200003eff000 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x200013877d80 with size: 0.000244 MiB 00:09:00.528 element at address: 0x200013877e80 with size: 0.000244 MiB 00:09:00.528 element at address: 0x200013877f80 with size: 0.000244 MiB 00:09:00.528 element at address: 0x200013878080 with size: 0.000244 MiB 00:09:00.528 element at address: 0x200013878180 with size: 0.000244 MiB 00:09:00.528 element at address: 0x200013878280 with size: 0.000244 MiB 00:09:00.528 element at address: 0x200013878380 with size: 0.000244 MiB 00:09:00.528 element at address: 0x200013878480 with size: 0.000244 MiB 00:09:00.528 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:09:00.528 element at address: 0x200019abc680 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001b08f9c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001b08fcc0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:09:00.528 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b091ec0 
with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b094fc0 with size: 0.000244 MiB 
00:09:00.529 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:09:00.529 element at address: 0x200028466540 with size: 0.000244 MiB 00:09:00.529 element at address: 0x200028466640 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846d300 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846d580 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846d680 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846d780 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846d880 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846d980 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846da80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846db80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846de80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846df80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846e080 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846e180 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846e280 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846e380 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846e480 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846e580 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846e680 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846e780 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846e880 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846e980 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846f080 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846f180 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846f280 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846f380 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846f480 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846f580 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846f680 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846f780 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846f880 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846f980 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:09:00.529 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:09:00.529 list of 
memzone associated elements. size: 602.264404 MiB 00:09:00.529 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:09:00.529 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:00.529 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:09:00.529 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:00.529 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:09:00.529 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_106175_0 00:09:00.529 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:09:00.529 associated memzone info: size: 48.002930 MiB name: MP_evtpool_106175_0 00:09:00.529 element at address: 0x200003fff340 with size: 48.003113 MiB 00:09:00.529 associated memzone info: size: 48.002930 MiB name: MP_msgpool_106175_0 00:09:00.529 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:09:00.529 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:00.529 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:09:00.529 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:00.529 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:09:00.529 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_106175 00:09:00.529 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:09:00.529 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_106175 00:09:00.529 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:00.529 associated memzone info: size: 1.007996 MiB name: MP_evtpool_106175 00:09:00.529 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:09:00.529 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:00.529 element at address: 0x200019abc780 with size: 1.008179 MiB 00:09:00.529 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:00.529 element at address: 0x200018efde00 with size: 1.008179 MiB 00:09:00.529 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:00.529 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:09:00.529 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:00.529 element at address: 0x200003eff100 with size: 1.000549 MiB 00:09:00.529 associated memzone info: size: 1.000366 MiB name: RG_ring_0_106175 00:09:00.529 element at address: 0x200003affb80 with size: 1.000549 MiB 00:09:00.529 associated memzone info: size: 1.000366 MiB name: RG_ring_1_106175 00:09:00.529 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:09:00.529 associated memzone info: size: 1.000366 MiB name: RG_ring_4_106175 00:09:00.529 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:09:00.529 associated memzone info: size: 1.000366 MiB name: RG_ring_5_106175 00:09:00.529 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:09:00.529 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_106175 00:09:00.530 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:09:00.530 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:00.530 element at address: 0x200013878680 with size: 0.500549 MiB 00:09:00.530 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:00.530 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:09:00.530 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:00.530 element at address: 0x200003adf740 with size: 0.125549 MiB 
00:09:00.530 associated memzone info: size: 0.125366 MiB name: RG_ring_2_106175 00:09:00.530 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:09:00.530 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:00.530 element at address: 0x200028466740 with size: 0.023804 MiB 00:09:00.530 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:00.530 element at address: 0x200003adb500 with size: 0.016174 MiB 00:09:00.530 associated memzone info: size: 0.015991 MiB name: RG_ring_3_106175 00:09:00.530 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:09:00.530 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:00.530 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:09:00.530 associated memzone info: size: 0.000183 MiB name: MP_msgpool_106175 00:09:00.530 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:09:00.530 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_106175 00:09:00.530 element at address: 0x20002846d400 with size: 0.000366 MiB 00:09:00.530 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:00.530 16:24:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:00.530 16:24:37 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 106175 00:09:00.530 16:24:37 -- common/autotest_common.sh@926 -- # '[' -z 106175 ']' 00:09:00.530 16:24:37 -- common/autotest_common.sh@930 -- # kill -0 106175 00:09:00.530 16:24:37 -- common/autotest_common.sh@931 -- # uname 00:09:00.530 16:24:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:00.530 16:24:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106175 00:09:00.530 killing process with pid 106175 00:09:00.530 16:24:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:00.530 16:24:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:00.530 16:24:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106175' 00:09:00.530 16:24:37 -- common/autotest_common.sh@945 -- # kill 106175 00:09:00.530 16:24:37 -- common/autotest_common.sh@950 -- # wait 106175 00:09:03.063 ************************************ 00:09:03.063 END TEST dpdk_mem_utility 00:09:03.063 ************************************ 00:09:03.063 00:09:03.063 real 0m4.161s 00:09:03.063 user 0m4.332s 00:09:03.063 sys 0m0.551s 00:09:03.063 16:24:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:03.063 16:24:39 -- common/autotest_common.sh@10 -- # set +x 00:09:03.063 16:24:39 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:03.063 16:24:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:03.063 16:24:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.063 16:24:39 -- common/autotest_common.sh@10 -- # set +x 00:09:03.063 ************************************ 00:09:03.063 START TEST event 00:09:03.063 ************************************ 00:09:03.063 16:24:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:03.063 * Looking for test storage... 
00:09:03.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:03.063 16:24:39 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:03.063 16:24:39 -- bdev/nbd_common.sh@6 -- # set -e 00:09:03.063 16:24:39 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:03.063 16:24:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:09:03.063 16:24:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:03.063 16:24:39 -- common/autotest_common.sh@10 -- # set +x 00:09:03.063 ************************************ 00:09:03.063 START TEST event_perf 00:09:03.063 ************************************ 00:09:03.063 16:24:39 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:03.063 Running I/O for 1 seconds...[2024-07-11 16:24:39.624501] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:03.063 [2024-07-11 16:24:39.624838] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106289 ] 00:09:03.063 [2024-07-11 16:24:39.815864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.322 [2024-07-11 16:24:40.037090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.322 [2024-07-11 16:24:40.037223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.322 [2024-07-11 16:24:40.037372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.322 [2024-07-11 16:24:40.037369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.698 Running I/O for 1 seconds... 00:09:04.698 lcore 0: 109038 00:09:04.698 lcore 1: 109036 00:09:04.698 lcore 2: 109035 00:09:04.698 lcore 3: 109037 00:09:04.698 done. 00:09:04.698 00:09:04.698 real 0m1.858s 00:09:04.698 user 0m4.633s 00:09:04.698 sys 0m0.124s 00:09:04.698 16:24:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:04.698 ************************************ 00:09:04.698 END TEST event_perf 00:09:04.698 ************************************ 00:09:04.698 16:24:41 -- common/autotest_common.sh@10 -- # set +x 00:09:04.698 16:24:41 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:04.698 16:24:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:04.698 16:24:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:04.698 16:24:41 -- common/autotest_common.sh@10 -- # set +x 00:09:04.698 ************************************ 00:09:04.698 START TEST event_reactor 00:09:04.698 ************************************ 00:09:04.698 16:24:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:04.956 [2024-07-11 16:24:41.538293] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:04.956 [2024-07-11 16:24:41.539021] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106342 ] 00:09:04.956 [2024-07-11 16:24:41.707951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.214 [2024-07-11 16:24:41.906768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.599 test_start 00:09:06.599 oneshot 00:09:06.599 tick 100 00:09:06.599 tick 100 00:09:06.599 tick 250 00:09:06.599 tick 100 00:09:06.599 tick 100 00:09:06.599 tick 100 00:09:06.599 tick 250 00:09:06.599 tick 500 00:09:06.599 tick 100 00:09:06.599 tick 100 00:09:06.599 tick 250 00:09:06.599 tick 100 00:09:06.599 tick 100 00:09:06.599 test_end 00:09:06.599 00:09:06.599 real 0m1.790s 00:09:06.599 user 0m1.556s 00:09:06.599 sys 0m0.132s 00:09:06.599 16:24:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.599 16:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:06.599 ************************************ 00:09:06.599 END TEST event_reactor 00:09:06.599 ************************************ 00:09:06.599 16:24:43 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:06.599 16:24:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:09:06.599 16:24:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:06.600 16:24:43 -- common/autotest_common.sh@10 -- # set +x 00:09:06.600 ************************************ 00:09:06.600 START TEST event_reactor_perf 00:09:06.600 ************************************ 00:09:06.600 16:24:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:06.600 [2024-07-11 16:24:43.387355] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:06.600 [2024-07-11 16:24:43.387572] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106392 ] 00:09:06.875 [2024-07-11 16:24:43.558511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.133 [2024-07-11 16:24:43.769581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.507 test_start 00:09:08.507 test_end 00:09:08.507 Performance: 318408 events per second 00:09:08.507 00:09:08.507 real 0m1.823s 00:09:08.507 user 0m1.587s 00:09:08.507 sys 0m0.132s 00:09:08.507 16:24:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.507 16:24:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.507 ************************************ 00:09:08.507 END TEST event_reactor_perf 00:09:08.507 ************************************ 00:09:08.507 16:24:45 -- event/event.sh@49 -- # uname -s 00:09:08.507 16:24:45 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:08.507 16:24:45 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:08.507 16:24:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:08.507 16:24:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:08.507 16:24:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.507 ************************************ 00:09:08.507 START TEST event_scheduler 00:09:08.507 ************************************ 00:09:08.507 16:24:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:08.507 * Looking for test storage... 00:09:08.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:08.507 16:24:45 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:08.507 16:24:45 -- scheduler/scheduler.sh@35 -- # scheduler_pid=106481 00:09:08.507 16:24:45 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:08.507 16:24:45 -- scheduler/scheduler.sh@37 -- # waitforlisten 106481 00:09:08.507 16:24:45 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:08.508 16:24:45 -- common/autotest_common.sh@819 -- # '[' -z 106481 ']' 00:09:08.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.508 16:24:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.508 16:24:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:08.508 16:24:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.508 16:24:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:08.508 16:24:45 -- common/autotest_common.sh@10 -- # set +x 00:09:08.766 [2024-07-11 16:24:45.371936] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:09:08.766 [2024-07-11 16:24:45.372152] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106481 ] 00:09:08.766 [2024-07-11 16:24:45.557410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.024 [2024-07-11 16:24:45.795753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.024 [2024-07-11 16:24:45.796016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.024 [2024-07-11 16:24:45.795935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.024 [2024-07-11 16:24:45.796021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.591 16:24:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:09.591 16:24:46 -- common/autotest_common.sh@852 -- # return 0 00:09:09.591 16:24:46 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:09.591 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.591 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:09.591 POWER: Env isn't set yet! 00:09:09.591 POWER: Attempting to initialise ACPI cpufreq power management... 00:09:09.591 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:09.591 POWER: Cannot set governor of lcore 0 to userspace 00:09:09.591 POWER: Attempting to initialise PSTAT power management... 00:09:09.591 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:09.591 POWER: Cannot set governor of lcore 0 to performance 00:09:09.591 POWER: Attempting to initialise AMD PSTATE power management... 00:09:09.591 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:09.591 POWER: Cannot set governor of lcore 0 to userspace 00:09:09.591 POWER: Attempting to initialise CPPC power management... 00:09:09.591 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:09.591 POWER: Cannot set governor of lcore 0 to userspace 00:09:09.591 POWER: Attempting to initialise VM power management... 
00:09:09.591 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:09.591 POWER: Unable to set Power Management Environment for lcore 0 00:09:09.591 [2024-07-11 16:24:46.334342] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:09:09.592 [2024-07-11 16:24:46.334382] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:09:09.592 [2024-07-11 16:24:46.334407] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:09:09.592 [2024-07-11 16:24:46.334444] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:09.592 [2024-07-11 16:24:46.334492] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:09.592 [2024-07-11 16:24:46.334523] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:09.592 16:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:09.592 16:24:46 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:09.592 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:09.592 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:10.159 [2024-07-11 16:24:46.691320] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:10.159 16:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.159 16:24:46 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:10.159 16:24:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:10.159 16:24:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.159 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:10.159 ************************************ 00:09:10.159 START TEST scheduler_create_thread 00:09:10.159 ************************************ 00:09:10.160 16:24:46 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:09:10.160 16:24:46 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:10.160 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.160 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:10.160 2 00:09:10.160 16:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.160 16:24:46 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:10.160 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.160 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:10.160 3 00:09:10.160 16:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.160 16:24:46 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:10.160 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.160 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:10.160 4 00:09:10.160 16:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.160 16:24:46 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:10.160 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.160 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:10.160 5 00:09:10.160 16:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.160 16:24:46 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:10.160 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.160 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:10.160 6 00:09:10.160 16:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.160 16:24:46 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:10.160 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.160 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:10.160 7 00:09:10.160 16:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.160 16:24:46 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:10.160 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.160 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:10.160 8 00:09:10.160 16:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.160 16:24:46 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:10.160 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.160 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:10.160 9 00:09:10.160 16:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.160 16:24:46 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:10.160 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.160 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:10.160 10 00:09:10.160 16:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.160 16:24:46 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:10.160 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.160 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:10.160 16:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.160 16:24:46 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:10.160 16:24:46 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:10.160 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.160 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:10.160 16:24:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:10.160 16:24:46 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:10.160 16:24:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:10.160 16:24:46 -- common/autotest_common.sh@10 -- # set +x 00:09:11.095 16:24:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:11.095 16:24:47 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:11.095 16:24:47 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:11.095 16:24:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:11.095 16:24:47 -- common/autotest_common.sh@10 -- # set +x 00:09:12.470 16:24:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.470 00:09:12.470 real 0m2.156s 00:09:12.470 user 0m0.011s 00:09:12.470 sys 0m0.000s 00:09:12.470 16:24:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:12.470 16:24:48 -- common/autotest_common.sh@10 -- # set +x 00:09:12.470 
************************************ 00:09:12.470 END TEST scheduler_create_thread 00:09:12.470 ************************************ 00:09:12.471 16:24:48 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:12.471 16:24:48 -- scheduler/scheduler.sh@46 -- # killprocess 106481 00:09:12.471 16:24:48 -- common/autotest_common.sh@926 -- # '[' -z 106481 ']' 00:09:12.471 16:24:48 -- common/autotest_common.sh@930 -- # kill -0 106481 00:09:12.471 16:24:48 -- common/autotest_common.sh@931 -- # uname 00:09:12.471 16:24:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:12.471 16:24:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106481 00:09:12.471 16:24:48 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:09:12.471 killing process with pid 106481 00:09:12.471 16:24:48 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:09:12.471 16:24:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106481' 00:09:12.471 16:24:48 -- common/autotest_common.sh@945 -- # kill 106481 00:09:12.471 16:24:48 -- common/autotest_common.sh@950 -- # wait 106481 00:09:12.729 [2024-07-11 16:24:49.341366] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:09:14.106 ************************************ 00:09:14.106 END TEST event_scheduler 00:09:14.106 ************************************ 00:09:14.106 00:09:14.106 real 0m5.289s 00:09:14.106 user 0m8.650s 00:09:14.106 sys 0m0.499s 00:09:14.106 16:24:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.106 16:24:50 -- common/autotest_common.sh@10 -- # set +x 00:09:14.106 16:24:50 -- event/event.sh@51 -- # modprobe -n nbd 00:09:14.106 16:24:50 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:14.106 16:24:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:14.106 16:24:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:14.106 16:24:50 -- common/autotest_common.sh@10 -- # set +x 00:09:14.106 ************************************ 00:09:14.106 START TEST app_repeat 00:09:14.106 ************************************ 00:09:14.106 16:24:50 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:09:14.106 16:24:50 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.106 16:24:50 -- event/event.sh@13 -- # nbd_list=("/dev/nbd0" "/dev/nbd1") 00:09:14.106 16:24:50 -- event/event.sh@13 -- # local nbd_list 00:09:14.106 16:24:50 -- event/event.sh@14 -- # bdev_list=("Malloc0" "Malloc1") 00:09:14.106 16:24:50 -- event/event.sh@14 -- # local bdev_list 00:09:14.106 16:24:50 -- event/event.sh@15 -- # local repeat_times=4 00:09:14.106 16:24:50 -- event/event.sh@17 -- # modprobe nbd 00:09:14.106 16:24:50 -- event/event.sh@19 -- # repeat_pid=106604 00:09:14.106 16:24:50 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:14.106 Process app_repeat pid: 106604 00:09:14.106 16:24:50 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:14.106 16:24:50 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 106604' 00:09:14.106 spdk_app_start Round 0 00:09:14.106 16:24:50 -- event/event.sh@23 -- # for i in {0..2} 00:09:14.106 16:24:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:14.106 16:24:50 -- event/event.sh@25 -- # waitforlisten 106604 /var/tmp/spdk-nbd.sock 00:09:14.106 16:24:50 -- common/autotest_common.sh@819 -- # '[' -z 106604 ']' 00:09:14.106 16:24:50 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:14.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:14.106 16:24:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:14.106 16:24:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:14.106 16:24:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:14.106 16:24:50 -- common/autotest_common.sh@10 -- # set +x 00:09:14.106 [2024-07-11 16:24:50.625867] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:14.106 [2024-07-11 16:24:50.626683] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106604 ] 00:09:14.106 [2024-07-11 16:24:50.800871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:14.365 [2024-07-11 16:24:51.027037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.365 [2024-07-11 16:24:51.027054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.932 16:24:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:14.933 16:24:51 -- common/autotest_common.sh@852 -- # return 0 00:09:14.933 16:24:51 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:15.191 Malloc0 00:09:15.191 16:24:51 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:15.449 Malloc1 00:09:15.708 16:24:52 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@12 -- # local i 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.708 16:24:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:15.708 /dev/nbd0 00:09:15.966 16:24:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:15.966 16:24:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:15.966 16:24:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:15.966 16:24:52 -- common/autotest_common.sh@857 -- # local i 00:09:15.966 16:24:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:15.966 16:24:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:15.966 
16:24:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:15.966 16:24:52 -- common/autotest_common.sh@861 -- # break 00:09:15.966 16:24:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:15.966 16:24:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:15.966 16:24:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:15.966 1+0 records in 00:09:15.966 1+0 records out 00:09:15.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374542 s, 10.9 MB/s 00:09:15.966 16:24:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:15.966 16:24:52 -- common/autotest_common.sh@874 -- # size=4096 00:09:15.966 16:24:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:15.966 16:24:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:15.966 16:24:52 -- common/autotest_common.sh@877 -- # return 0 00:09:15.966 16:24:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:15.966 16:24:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:15.966 16:24:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:16.226 /dev/nbd1 00:09:16.226 16:24:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:16.226 16:24:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:16.226 16:24:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:16.226 16:24:52 -- common/autotest_common.sh@857 -- # local i 00:09:16.226 16:24:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:16.226 16:24:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:16.226 16:24:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:16.226 16:24:52 -- common/autotest_common.sh@861 -- # break 00:09:16.226 16:24:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:16.226 16:24:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:16.226 16:24:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:16.226 1+0 records in 00:09:16.226 1+0 records out 00:09:16.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368238 s, 11.1 MB/s 00:09:16.226 16:24:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:16.226 16:24:52 -- common/autotest_common.sh@874 -- # size=4096 00:09:16.226 16:24:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:16.226 16:24:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:16.226 16:24:52 -- common/autotest_common.sh@877 -- # return 0 00:09:16.226 16:24:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:16.226 16:24:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:16.226 16:24:52 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:16.226 16:24:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.226 16:24:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:16.484 { 00:09:16.484 "nbd_device": "/dev/nbd0", 00:09:16.484 "bdev_name": "Malloc0" 00:09:16.484 }, 00:09:16.484 { 00:09:16.484 "nbd_device": "/dev/nbd1", 00:09:16.484 "bdev_name": "Malloc1" 00:09:16.484 } 00:09:16.484 
]' 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:16.484 { 00:09:16.484 "nbd_device": "/dev/nbd0", 00:09:16.484 "bdev_name": "Malloc0" 00:09:16.484 }, 00:09:16.484 { 00:09:16.484 "nbd_device": "/dev/nbd1", 00:09:16.484 "bdev_name": "Malloc1" 00:09:16.484 } 00:09:16.484 ]' 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:16.484 /dev/nbd1' 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:16.484 /dev/nbd1' 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@65 -- # count=2 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@95 -- # count=2 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:16.484 256+0 records in 00:09:16.484 256+0 records out 00:09:16.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00753008 s, 139 MB/s 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:16.484 256+0 records in 00:09:16.484 256+0 records out 00:09:16.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287508 s, 36.5 MB/s 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:16.484 256+0 records in 00:09:16.484 256+0 records out 00:09:16.484 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287515 s, 36.5 MB/s 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@103 -- # 
nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@51 -- # local i 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.484 16:24:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:16.742 16:24:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:16.742 16:24:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:16.742 16:24:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:16.742 16:24:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.742 16:24:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.742 16:24:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:16.742 16:24:53 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:17.000 16:24:53 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:17.001 16:24:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.001 16:24:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:17.001 16:24:53 -- bdev/nbd_common.sh@41 -- # break 00:09:17.001 16:24:53 -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.001 16:24:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:17.001 16:24:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@41 -- # break 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@45 -- # return 0 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.259 16:24:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:17.518 16:24:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:17.518 16:24:54 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:17.518 16:24:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:17.518 16:24:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:17.518 16:24:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:17.518 16:24:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:17.518 16:24:54 -- bdev/nbd_common.sh@65 -- # true 00:09:17.518 16:24:54 -- bdev/nbd_common.sh@65 -- # count=0 00:09:17.518 16:24:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:17.518 16:24:54 -- bdev/nbd_common.sh@104 -- # count=0 00:09:17.518 16:24:54 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:17.518 16:24:54 -- bdev/nbd_common.sh@109 -- # return 0 00:09:17.518 
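The block above is one full pass of the NBD data-integrity check: the test stages 1 MiB of random data (256 blocks of 4 KiB), pushes it through each exported /dev/nbd device with O_DIRECT, reads it back with cmp, then unexports both devices and confirms nbd_get_disks reports an empty list. A minimal stand-alone sketch of that round trip follows; the device names and temp-file path are placeholders, and root privileges plus an already-exported pair of NBD devices are assumed.

```bash
#!/usr/bin/env bash
# Sketch of the write-then-verify round trip traced above.
set -euo pipefail

tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)
nbd_list=(/dev/nbd0 /dev/nbd1)   # assumed to be exported already

# Stage 1 MiB of random data: 256 blocks of 4 KiB, as in the log.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

for dev in "${nbd_list[@]}"; do
    # O_DIRECT write so the data reaches the NBD backend, not the page cache.
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

for dev in "${nbd_list[@]}"; do
    # Byte-wise compare of the first 1 MiB; cmp exits non-zero on a mismatch.
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm -f "$tmp_file"
echo "NBD round-trip verify passed"
```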
16:24:54 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:18.085 16:24:54 -- event/event.sh@35 -- # sleep 3 00:09:19.461 [2024-07-11 16:24:55.941731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:19.461 [2024-07-11 16:24:56.139222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.461 [2024-07-11 16:24:56.139234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.718 [2024-07-11 16:24:56.340137] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:19.718 [2024-07-11 16:24:56.340365] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:21.093 16:24:57 -- event/event.sh@23 -- # for i in {0..2} 00:09:21.093 16:24:57 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:21.093 spdk_app_start Round 1 00:09:21.093 16:24:57 -- event/event.sh@25 -- # waitforlisten 106604 /var/tmp/spdk-nbd.sock 00:09:21.093 16:24:57 -- common/autotest_common.sh@819 -- # '[' -z 106604 ']' 00:09:21.093 16:24:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:21.093 16:24:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:21.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:21.093 16:24:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:21.093 16:24:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:21.093 16:24:57 -- common/autotest_common.sh@10 -- # set +x 00:09:21.351 16:24:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:21.351 16:24:57 -- common/autotest_common.sh@852 -- # return 0 00:09:21.351 16:24:57 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:21.610 Malloc0 00:09:21.610 16:24:58 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:21.869 Malloc1 00:09:21.869 16:24:58 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@12 -- # local i 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:21.869 16:24:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:22.129 /dev/nbd0 
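The spdk_kill_instance SIGTERM RPC at the start of this block is how every app_repeat round ends: the app catches the signal, tears down the current iteration, and re-enters spdk_app_start while the driver sleeps. The loop below is a hedged reconstruction of that driver; waitforlisten and nbd_rpc_data_verify are the test's own helpers visible in the trace, and the PID is the one from this run.

```bash
# Sketch of the per-round driver, assuming an app_repeat binary that
# re-enters spdk_app_start after each SIGTERM (as the Round summaries show).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
app_pid=106604   # app_repeat PID from this run

for round in 0 1 2; do
    echo "spdk_app_start Round $round"
    waitforlisten "$app_pid" "$sock"   # test helper: block until the socket is up

    "$rpc" -s "$sock" bdev_malloc_create 64 4096   # creates Malloc0 (64 MiB, 4 KiB blocks)
    "$rpc" -s "$sock" bdev_malloc_create 64 4096   # creates Malloc1
    nbd_rpc_data_verify "$sock" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'

    # End the round: the app shuts this iteration down and restarts
    # while the driver waits out the transition.
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM
    sleep 3
done
```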
00:09:22.129 16:24:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:22.129 16:24:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:22.129 16:24:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:22.129 16:24:58 -- common/autotest_common.sh@857 -- # local i 00:09:22.129 16:24:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:22.129 16:24:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:22.129 16:24:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:22.129 16:24:58 -- common/autotest_common.sh@861 -- # break 00:09:22.129 16:24:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:22.129 16:24:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:22.129 16:24:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:22.129 1+0 records in 00:09:22.129 1+0 records out 00:09:22.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333196 s, 12.3 MB/s 00:09:22.129 16:24:58 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:22.129 16:24:58 -- common/autotest_common.sh@874 -- # size=4096 00:09:22.129 16:24:58 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:22.129 16:24:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:22.129 16:24:58 -- common/autotest_common.sh@877 -- # return 0 00:09:22.129 16:24:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.129 16:24:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:22.129 16:24:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:22.388 /dev/nbd1 00:09:22.388 16:24:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:22.388 16:24:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:22.388 16:24:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:22.388 16:24:59 -- common/autotest_common.sh@857 -- # local i 00:09:22.388 16:24:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:22.388 16:24:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:22.388 16:24:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:22.388 16:24:59 -- common/autotest_common.sh@861 -- # break 00:09:22.388 16:24:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:22.388 16:24:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:22.388 16:24:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:22.388 1+0 records in 00:09:22.388 1+0 records out 00:09:22.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724552 s, 5.7 MB/s 00:09:22.388 16:24:59 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:22.388 16:24:59 -- common/autotest_common.sh@874 -- # size=4096 00:09:22.388 16:24:59 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:22.388 16:24:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:22.388 16:24:59 -- common/autotest_common.sh@877 -- # return 0 00:09:22.388 16:24:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:22.388 16:24:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:22.388 16:24:59 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:22.388 16:24:59 -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.388 16:24:59 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:22.646 16:24:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:22.646 { 00:09:22.646 "nbd_device": "/dev/nbd0", 00:09:22.646 "bdev_name": "Malloc0" 00:09:22.646 }, 00:09:22.646 { 00:09:22.646 "nbd_device": "/dev/nbd1", 00:09:22.646 "bdev_name": "Malloc1" 00:09:22.646 } 00:09:22.646 ]' 00:09:22.646 16:24:59 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:22.646 { 00:09:22.646 "nbd_device": "/dev/nbd0", 00:09:22.646 "bdev_name": "Malloc0" 00:09:22.646 }, 00:09:22.646 { 00:09:22.646 "nbd_device": "/dev/nbd1", 00:09:22.646 "bdev_name": "Malloc1" 00:09:22.646 } 00:09:22.646 ]' 00:09:22.646 16:24:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:22.905 /dev/nbd1' 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:22.905 /dev/nbd1' 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@65 -- # count=2 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@95 -- # count=2 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:22.905 256+0 records in 00:09:22.905 256+0 records out 00:09:22.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00942823 s, 111 MB/s 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:22.905 16:24:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:22.905 256+0 records in 00:09:22.905 256+0 records out 00:09:22.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290381 s, 36.1 MB/s 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:22.906 256+0 records in 00:09:22.906 256+0 records out 00:09:22.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0360263 s, 29.1 MB/s 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:22.906 16:24:59 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@51 -- # local i 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:22.906 16:24:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@41 -- # break 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:23.164 16:24:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:23.423 16:25:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:23.423 16:25:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:23.423 16:25:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:23.423 16:25:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:23.423 16:25:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:23.423 16:25:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:23.423 16:25:00 -- bdev/nbd_common.sh@41 -- # break 00:09:23.423 16:25:00 -- bdev/nbd_common.sh@45 -- # return 0 00:09:23.423 16:25:00 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:23.423 16:25:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:23.423 16:25:00 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:23.682 16:25:00 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:23.682 16:25:00 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:23.682 16:25:00 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:23.942 16:25:00 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:23.942 16:25:00 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:23.942 16:25:00 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:23.942 16:25:00 -- bdev/nbd_common.sh@65 -- # true 00:09:23.942 16:25:00 -- bdev/nbd_common.sh@65 -- # count=0 00:09:23.942 16:25:00 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:23.942 
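Stopping an export is asynchronous as well, which is why each nbd_stop_disk RPC above is followed by a waitfornbd_exit poll: the helper re-reads /proc/partitions until the device name disappears, sleeping 100 ms between attempts and bounding the wait at 20 tries. A compact reconstruction of that loop:

```bash
# Wait for an NBD device to vanish from /proc/partitions after nbd_stop_disk.
waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # grep -q -w: match the bare device name as a whole word.
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            return 0                  # device is gone
        fi
        sleep 0.1                     # still present; retry shortly
    done
    return 1                          # never disappeared within the bound
}

waitfornbd_exit nbd0
```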
16:25:00 -- bdev/nbd_common.sh@104 -- # count=0 00:09:23.942 16:25:00 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:23.942 16:25:00 -- bdev/nbd_common.sh@109 -- # return 0 00:09:23.942 16:25:00 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:24.201 16:25:00 -- event/event.sh@35 -- # sleep 3 00:09:25.579 [2024-07-11 16:25:02.199513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:25.839 [2024-07-11 16:25:02.394222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.839 [2024-07-11 16:25:02.394222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.839 [2024-07-11 16:25:02.590962] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:25.839 [2024-07-11 16:25:02.591071] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:27.214 spdk_app_start Round 2 00:09:27.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:27.214 16:25:03 -- event/event.sh@23 -- # for i in {0..2} 00:09:27.214 16:25:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:27.214 16:25:03 -- event/event.sh@25 -- # waitforlisten 106604 /var/tmp/spdk-nbd.sock 00:09:27.214 16:25:03 -- common/autotest_common.sh@819 -- # '[' -z 106604 ']' 00:09:27.214 16:25:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:27.214 16:25:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:27.214 16:25:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
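The count=0 assertion that closes each round comes from nbd_get_count, which lists the remaining exports over RPC and counts the device paths. Its one subtlety is visible in the trace: grep -c prints 0 but exits non-zero when nothing matches, so the pipeline is rescued with true (the bare "# true" entries above). A sketch under those same conventions:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

nbd_get_count() {
    local rpc_server=$1 disks_json
    disks_json=$("$rpc" -s "$rpc_server" nbd_get_disks)
    # Pull the device paths out of the JSON and count them; || true keeps
    # the function's exit status clean when the count is zero.
    echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true
}

count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
[ "$count" -eq 0 ] && echo "all NBD exports torn down"
```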
00:09:27.214 16:25:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:27.214 16:25:03 -- common/autotest_common.sh@10 -- # set +x 00:09:27.472 16:25:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:27.472 16:25:04 -- common/autotest_common.sh@852 -- # return 0 00:09:27.472 16:25:04 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:28.038 Malloc0 00:09:28.038 16:25:04 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:28.297 Malloc1 00:09:28.297 16:25:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@12 -- # local i 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.297 16:25:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:28.555 /dev/nbd0 00:09:28.555 16:25:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:28.555 16:25:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:28.555 16:25:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:28.555 16:25:05 -- common/autotest_common.sh@857 -- # local i 00:09:28.555 16:25:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:28.555 16:25:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:28.555 16:25:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:28.555 16:25:05 -- common/autotest_common.sh@861 -- # break 00:09:28.555 16:25:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:28.555 16:25:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:28.555 16:25:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:28.555 1+0 records in 00:09:28.555 1+0 records out 00:09:28.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551138 s, 7.4 MB/s 00:09:28.555 16:25:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:28.555 16:25:05 -- common/autotest_common.sh@874 -- # size=4096 00:09:28.555 16:25:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:28.555 16:25:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:28.555 16:25:05 -- common/autotest_common.sh@877 -- # return 0 00:09:28.555 16:25:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.555 16:25:05 -- bdev/nbd_common.sh@14 -- # 
(( i < 2 )) 00:09:28.555 16:25:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:28.813 /dev/nbd1 00:09:28.813 16:25:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:28.813 16:25:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:28.813 16:25:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:28.813 16:25:05 -- common/autotest_common.sh@857 -- # local i 00:09:28.813 16:25:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:28.813 16:25:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:28.813 16:25:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:28.813 16:25:05 -- common/autotest_common.sh@861 -- # break 00:09:28.813 16:25:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:28.813 16:25:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:28.813 16:25:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:28.813 1+0 records in 00:09:28.813 1+0 records out 00:09:28.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266427 s, 15.4 MB/s 00:09:28.813 16:25:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:28.813 16:25:05 -- common/autotest_common.sh@874 -- # size=4096 00:09:28.813 16:25:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:28.813 16:25:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:28.813 16:25:05 -- common/autotest_common.sh@877 -- # return 0 00:09:28.813 16:25:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:28.813 16:25:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:28.813 16:25:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:28.813 16:25:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.813 16:25:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:29.070 { 00:09:29.070 "nbd_device": "/dev/nbd0", 00:09:29.070 "bdev_name": "Malloc0" 00:09:29.070 }, 00:09:29.070 { 00:09:29.070 "nbd_device": "/dev/nbd1", 00:09:29.070 "bdev_name": "Malloc1" 00:09:29.070 } 00:09:29.070 ]' 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:29.070 { 00:09:29.070 "nbd_device": "/dev/nbd0", 00:09:29.070 "bdev_name": "Malloc0" 00:09:29.070 }, 00:09:29.070 { 00:09:29.070 "nbd_device": "/dev/nbd1", 00:09:29.070 "bdev_name": "Malloc1" 00:09:29.070 } 00:09:29.070 ]' 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:29.070 /dev/nbd1' 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:29.070 /dev/nbd1' 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@65 -- # count=2 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@95 -- # count=2 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:29.070 16:25:05 -- 
bdev/nbd_common.sh@71 -- # local operation=write 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:29.070 16:25:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:29.327 256+0 records in 00:09:29.327 256+0 records out 00:09:29.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00808167 s, 130 MB/s 00:09:29.327 16:25:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:29.327 16:25:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:29.327 256+0 records in 00:09:29.327 256+0 records out 00:09:29.327 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026509 s, 39.6 MB/s 00:09:29.327 16:25:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:29.327 16:25:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:29.327 256+0 records in 00:09:29.327 256+0 records out 00:09:29.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0338511 s, 31.0 MB/s 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@51 -- # local i 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.328 16:25:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:29.585 16:25:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:29.585 16:25:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:29.585 16:25:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:29.585 16:25:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.585 16:25:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.585 16:25:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:29.585 16:25:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:29.585 16:25:06 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:29.585 16:25:06 
-- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.585 16:25:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:29.585 16:25:06 -- bdev/nbd_common.sh@41 -- # break 00:09:29.585 16:25:06 -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.585 16:25:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.585 16:25:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@41 -- # break 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.843 16:25:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:30.100 16:25:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:30.100 16:25:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:30.100 16:25:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:30.358 16:25:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:30.358 16:25:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:30.358 16:25:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:30.358 16:25:06 -- bdev/nbd_common.sh@65 -- # true 00:09:30.358 16:25:06 -- bdev/nbd_common.sh@65 -- # count=0 00:09:30.358 16:25:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:30.358 16:25:06 -- bdev/nbd_common.sh@104 -- # count=0 00:09:30.358 16:25:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:30.358 16:25:06 -- bdev/nbd_common.sh@109 -- # return 0 00:09:30.358 16:25:06 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:30.615 16:25:07 -- event/event.sh@35 -- # sleep 3 00:09:31.547 [2024-07-11 16:25:08.207536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:31.805 [2024-07-11 16:25:08.365625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.805 [2024-07-11 16:25:08.365631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.805 [2024-07-11 16:25:08.533926] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:31.805 [2024-07-11 16:25:08.534076] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:33.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
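The "Waiting for process to start up and listen on UNIX domain socket..." message printed above comes from waitforlisten. The trace only exposes its max_retries=100 default, the muted xtrace, and the final (( i == 0 )) check, so the probe in the sketch below (a liveness check plus a socket test) is an assumption rather than SPDK's actual implementation:

```bash
# Hedged sketch of a waitforlisten-style helper; the kill -0 / -S probe
# is assumed, not taken from autotest_common.sh.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = max_retries; i > 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died while waiting
        [ -S "$rpc_addr" ] && return 0           # socket exists: treat as listening
        sleep 0.5
    done
    return 1                                     # retries exhausted
}
```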
00:09:33.704 16:25:10 -- event/event.sh@38 -- # waitforlisten 106604 /var/tmp/spdk-nbd.sock 00:09:33.704 16:25:10 -- common/autotest_common.sh@819 -- # '[' -z 106604 ']' 00:09:33.704 16:25:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:33.704 16:25:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:33.704 16:25:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:33.704 16:25:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:33.704 16:25:10 -- common/autotest_common.sh@10 -- # set +x 00:09:33.704 16:25:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:33.704 16:25:10 -- common/autotest_common.sh@852 -- # return 0 00:09:33.704 16:25:10 -- event/event.sh@39 -- # killprocess 106604 00:09:33.704 16:25:10 -- common/autotest_common.sh@926 -- # '[' -z 106604 ']' 00:09:33.704 16:25:10 -- common/autotest_common.sh@930 -- # kill -0 106604 00:09:33.704 16:25:10 -- common/autotest_common.sh@931 -- # uname 00:09:33.704 16:25:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:33.704 16:25:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 106604 00:09:33.962 killing process with pid 106604 00:09:33.962 16:25:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:33.962 16:25:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:33.962 16:25:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 106604' 00:09:33.962 16:25:10 -- common/autotest_common.sh@945 -- # kill 106604 00:09:33.962 16:25:10 -- common/autotest_common.sh@950 -- # wait 106604 00:09:34.897 spdk_app_start is called in Round 0. 00:09:34.897 Shutdown signal received, stop current app iteration 00:09:34.897 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:09:34.897 spdk_app_start is called in Round 1. 00:09:34.897 Shutdown signal received, stop current app iteration 00:09:34.897 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:09:34.897 spdk_app_start is called in Round 2. 00:09:34.897 Shutdown signal received, stop current app iteration 00:09:34.897 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:09:34.897 spdk_app_start is called in Round 3. 
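After the third round the driver reaps the app with killprocess, whose whole trace appears above: check the PID is non-empty and alive, look up its command name with ps, refuse to signal anything running as sudo, then kill and wait. Reconstructed as a function:

```bash
# Reconstruction of the killprocess pattern traced above.
killprocess() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                   # must still be running
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1       # safety check from the log
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                          # reap; ignore the exit status
}

killprocess 106604
```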
00:09:34.897 Shutdown signal received, stop current app iteration 00:09:34.897 ************************************ 00:09:34.897 END TEST app_repeat 00:09:34.897 ************************************ 00:09:34.897 16:25:11 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:34.897 16:25:11 -- event/event.sh@42 -- # return 0 00:09:34.897 00:09:34.897 real 0m20.840s 00:09:34.897 user 0m44.790s 00:09:34.897 sys 0m2.929s 00:09:34.897 16:25:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.897 16:25:11 -- common/autotest_common.sh@10 -- # set +x 00:09:34.897 16:25:11 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:34.897 16:25:11 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:34.897 16:25:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:34.897 16:25:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:34.897 16:25:11 -- common/autotest_common.sh@10 -- # set +x 00:09:34.897 ************************************ 00:09:34.897 START TEST cpu_locks 00:09:34.897 ************************************ 00:09:34.897 16:25:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:34.897 * Looking for test storage... 00:09:34.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:34.897 16:25:11 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:34.897 16:25:11 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:34.897 16:25:11 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:34.897 16:25:11 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:34.897 16:25:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:34.897 16:25:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:34.897 16:25:11 -- common/autotest_common.sh@10 -- # set +x 00:09:34.897 ************************************ 00:09:34.897 START TEST default_locks 00:09:34.897 ************************************ 00:09:34.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.897 16:25:11 -- common/autotest_common.sh@1104 -- # default_locks 00:09:34.897 16:25:11 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=107187 00:09:34.897 16:25:11 -- event/cpu_locks.sh@47 -- # waitforlisten 107187 00:09:34.897 16:25:11 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:34.897 16:25:11 -- common/autotest_common.sh@819 -- # '[' -z 107187 ']' 00:09:34.897 16:25:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.897 16:25:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:34.897 16:25:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.897 16:25:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:34.897 16:25:11 -- common/autotest_common.sh@10 -- # set +x 00:09:34.897 [2024-07-11 16:25:11.624545] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
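Every test above and below runs inside a run_test wrapper, which rejects calls with fewer than two arguments, prints the starred START TEST / END TEST banners, and times the body (the real/user/sys lines in the output). A sketch of such a wrapper; the banner width is illustrative rather than exact:

```bash
# run_test-style wrapper matching the banners and `time` output in the log.
run_test() {
    local name=$1
    [ "$#" -le 1 ] && return 1    # mirrors the '[' 2 -le 1 ']' check above
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                     # bash keyword: prints real/user/sys on stderr
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

run_test default_locks default_locks
```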
00:09:34.897 [2024-07-11 16:25:11.624726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107187 ] 00:09:35.156 [2024-07-11 16:25:11.790627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.156 [2024-07-11 16:25:11.951908] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:35.156 [2024-07-11 16:25:11.952112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.532 16:25:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:36.532 16:25:13 -- common/autotest_common.sh@852 -- # return 0 00:09:36.532 16:25:13 -- event/cpu_locks.sh@49 -- # locks_exist 107187 00:09:36.532 16:25:13 -- event/cpu_locks.sh@22 -- # lslocks -p 107187 00:09:36.532 16:25:13 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:36.791 16:25:13 -- event/cpu_locks.sh@50 -- # killprocess 107187 00:09:36.791 16:25:13 -- common/autotest_common.sh@926 -- # '[' -z 107187 ']' 00:09:36.791 16:25:13 -- common/autotest_common.sh@930 -- # kill -0 107187 00:09:36.791 16:25:13 -- common/autotest_common.sh@931 -- # uname 00:09:36.791 16:25:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:36.791 16:25:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107187 00:09:36.791 killing process with pid 107187 00:09:36.791 16:25:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:36.791 16:25:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:36.791 16:25:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107187' 00:09:36.791 16:25:13 -- common/autotest_common.sh@945 -- # kill 107187 00:09:36.791 16:25:13 -- common/autotest_common.sh@950 -- # wait 107187 00:09:38.692 16:25:15 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 107187 00:09:38.692 16:25:15 -- common/autotest_common.sh@640 -- # local es=0 00:09:38.692 16:25:15 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107187 00:09:38.692 16:25:15 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:09:38.692 16:25:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:38.692 16:25:15 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:09:38.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.692 ERROR: process (pid: 107187) is no longer running 00:09:38.692 ************************************ 00:09:38.692 END TEST default_locks 00:09:38.692 ************************************ 00:09:38.692 16:25:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:38.692 16:25:15 -- common/autotest_common.sh@643 -- # waitforlisten 107187 00:09:38.692 16:25:15 -- common/autotest_common.sh@819 -- # '[' -z 107187 ']' 00:09:38.692 16:25:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.692 16:25:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.692 16:25:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
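The lslocks call above is the core assertion of the cpu_locks suite: when spdk_tgt starts with core mask 0x1 it takes a POSIX file lock on a /var/tmp/spdk_cpu_lock* file for that core, and locks_exist checks that the target PID holds one:

```bash
# Check that a target process holds its per-core CPU lock. lslocks (util-linux)
# lists the file locks held by a given PID; spdk_cpu_lock* are the lock files
# spdk_tgt creates under /var/tmp for each core in its mask.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

locks_exist 107187 && echo "core locks are held"
```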
00:09:38.692 16:25:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.692 16:25:15 -- common/autotest_common.sh@10 -- # set +x 00:09:38.692 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107187) - No such process 00:09:38.692 16:25:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:38.692 16:25:15 -- common/autotest_common.sh@852 -- # return 1 00:09:38.692 16:25:15 -- common/autotest_common.sh@643 -- # es=1 00:09:38.692 16:25:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:38.692 16:25:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:38.692 16:25:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:38.692 16:25:15 -- event/cpu_locks.sh@54 -- # no_locks 00:09:38.692 16:25:15 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:09:38.692 16:25:15 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:38.692 16:25:15 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:38.692 00:09:38.692 real 0m3.739s 00:09:38.692 user 0m3.924s 00:09:38.692 sys 0m0.604s 00:09:38.692 16:25:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.692 16:25:15 -- common/autotest_common.sh@10 -- # set +x 00:09:38.692 16:25:15 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:38.693 16:25:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:38.693 16:25:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:38.693 16:25:15 -- common/autotest_common.sh@10 -- # set +x 00:09:38.693 ************************************ 00:09:38.693 START TEST default_locks_via_rpc 00:09:38.693 ************************************ 00:09:38.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.693 16:25:15 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:09:38.693 16:25:15 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=107291 00:09:38.693 16:25:15 -- event/cpu_locks.sh@63 -- # waitforlisten 107291 00:09:38.693 16:25:15 -- common/autotest_common.sh@819 -- # '[' -z 107291 ']' 00:09:38.693 16:25:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.693 16:25:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:38.693 16:25:15 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:38.693 16:25:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.693 16:25:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:38.693 16:25:15 -- common/autotest_common.sh@10 -- # set +x 00:09:38.693 [2024-07-11 16:25:15.424350] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
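default_locks ends with a negative test: waitforlisten is run against the already-killed PID under a NOT wrapper, so the "No such process" error above is expected and the test passes precisely because the wrapped command fails. A simplified reconstruction (the valid_exec_arg check and the exact (( !es == 0 )) form in the trace are condensed here into an equivalent inequality):

```bash
# Run a command that is expected to fail and invert the result.
NOT() {
    local es=0
    "$@" || es=$?
    # Exit codes above 128 mean death by signal; fold them into plain failure.
    (( es > 128 )) && es=1
    # NOT succeeds only when the wrapped command did not.
    (( es != 0 ))
}

NOT waitforlisten 107187 && echo "pid 107187 is gone, as expected"
```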
00:09:38.693 [2024-07-11 16:25:15.424561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107291 ] 00:09:38.951 [2024-07-11 16:25:15.594587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.209 [2024-07-11 16:25:15.757977] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:39.209 [2024-07-11 16:25:15.758181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.146 16:25:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:40.146 16:25:16 -- common/autotest_common.sh@852 -- # return 0 00:09:40.146 16:25:16 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:40.146 16:25:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:40.146 16:25:16 -- common/autotest_common.sh@10 -- # set +x 00:09:40.404 16:25:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:40.404 16:25:16 -- event/cpu_locks.sh@67 -- # no_locks 00:09:40.404 16:25:16 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:09:40.404 16:25:16 -- event/cpu_locks.sh@26 -- # local lock_files 00:09:40.404 16:25:16 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:40.405 16:25:16 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:40.405 16:25:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:40.405 16:25:16 -- common/autotest_common.sh@10 -- # set +x 00:09:40.405 16:25:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:40.405 16:25:16 -- event/cpu_locks.sh@71 -- # locks_exist 107291 00:09:40.405 16:25:16 -- event/cpu_locks.sh@22 -- # lslocks -p 107291 00:09:40.405 16:25:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:40.405 16:25:17 -- event/cpu_locks.sh@73 -- # killprocess 107291 00:09:40.405 16:25:17 -- common/autotest_common.sh@926 -- # '[' -z 107291 ']' 00:09:40.405 16:25:17 -- common/autotest_common.sh@930 -- # kill -0 107291 00:09:40.405 16:25:17 -- common/autotest_common.sh@931 -- # uname 00:09:40.405 16:25:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:40.405 16:25:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107291 00:09:40.405 killing process with pid 107291 00:09:40.405 16:25:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:40.405 16:25:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:40.405 16:25:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107291' 00:09:40.405 16:25:17 -- common/autotest_common.sh@945 -- # kill 107291 00:09:40.405 16:25:17 -- common/autotest_common.sh@950 -- # wait 107291 00:09:42.307 ************************************ 00:09:42.307 END TEST default_locks_via_rpc 00:09:42.307 ************************************ 00:09:42.307 00:09:42.307 real 0m3.597s 00:09:42.307 user 0m3.667s 00:09:42.307 sys 0m0.601s 00:09:42.307 16:25:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:42.307 16:25:18 -- common/autotest_common.sh@10 -- # set +x 00:09:42.307 16:25:18 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:42.307 16:25:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:42.307 16:25:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:42.307 16:25:18 -- common/autotest_common.sh@10 -- # set +x 00:09:42.307 
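default_locks_via_rpc exercises the same locks, but toggled at runtime: framework_disable_cpumask_locks releases the per-core lock files, no_locks asserts the /var/tmp/spdk_cpu_lock* glob matches nothing, and framework_enable_cpumask_locks re-acquires them. A sketch of that sequence; the file count below stands in for the test's array-based no_locks helper, and rpc.py is assumed to default to /var/tmp/spdk.sock when no -s flag is given:

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
spdk_tgt_pid=107291   # PID from the trace above

"$rpc" framework_disable_cpumask_locks    # releases /var/tmp/spdk_cpu_lock*

# no_locks: assert that no lock files remain after the disable call.
(( $(ls /var/tmp/spdk_cpu_lock* 2>/dev/null | wc -l) == 0 ))

"$rpc" framework_enable_cpumask_locks     # re-acquires the per-core locks
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock
```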
************************************ 00:09:42.307 START TEST non_locking_app_on_locked_coremask 00:09:42.307 ************************************ 00:09:42.307 16:25:18 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:09:42.307 16:25:18 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=107365 00:09:42.307 16:25:18 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:42.307 16:25:18 -- event/cpu_locks.sh@81 -- # waitforlisten 107365 /var/tmp/spdk.sock 00:09:42.307 16:25:18 -- common/autotest_common.sh@819 -- # '[' -z 107365 ']' 00:09:42.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.307 16:25:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.307 16:25:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:42.307 16:25:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.307 16:25:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:42.307 16:25:18 -- common/autotest_common.sh@10 -- # set +x 00:09:42.307 [2024-07-11 16:25:19.052632] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:42.307 [2024-07-11 16:25:19.052802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107365 ] 00:09:42.566 [2024-07-11 16:25:19.200807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.566 [2024-07-11 16:25:19.365163] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:42.566 [2024-07-11 16:25:19.365411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.941 16:25:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:43.941 16:25:20 -- common/autotest_common.sh@852 -- # return 0 00:09:43.941 16:25:20 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=107400 00:09:43.941 16:25:20 -- event/cpu_locks.sh@85 -- # waitforlisten 107400 /var/tmp/spdk2.sock 00:09:43.941 16:25:20 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:43.941 16:25:20 -- common/autotest_common.sh@819 -- # '[' -z 107400 ']' 00:09:43.941 16:25:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:43.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:43.941 16:25:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:43.941 16:25:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:43.941 16:25:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:43.941 16:25:20 -- common/autotest_common.sh@10 -- # set +x 00:09:43.941 [2024-07-11 16:25:20.686160] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:43.941 [2024-07-11 16:25:20.686347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107400 ] 00:09:44.200 [2024-07-11 16:25:20.844250] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
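non_locking_app_on_locked_coremask shows the lock can be opted out of at launch: a first target holds the core 0 lock, yet a second instance on the same mask still starts because it is given --disable-cpumask-locks and its own RPC socket (hence the "CPU core locks deactivated" notice above). Roughly, using the binary path and flags from this run:

```bash
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$spdk_tgt" -m 0x1 &                 # takes the core 0 lock
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock

# Second instance on the same mask: skips the lock and uses its own socket,
# so both targets can share core 0 without colliding on the RPC path.
"$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock
```

The follow-up test, locking_app_on_unlocked_coremask, inverts the launch order: the first target is the unlocked one, and the lslocks check then confirms the second, normally launched target is the lock holder.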
00:09:44.200 [2024-07-11 16:25:20.844350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.459 [2024-07-11 16:25:21.160538] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:44.459 [2024-07-11 16:25:21.160818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.362 16:25:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:46.362 16:25:22 -- common/autotest_common.sh@852 -- # return 0 00:09:46.362 16:25:22 -- event/cpu_locks.sh@87 -- # locks_exist 107365 00:09:46.362 16:25:22 -- event/cpu_locks.sh@22 -- # lslocks -p 107365 00:09:46.362 16:25:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:46.619 16:25:23 -- event/cpu_locks.sh@89 -- # killprocess 107365 00:09:46.619 16:25:23 -- common/autotest_common.sh@926 -- # '[' -z 107365 ']' 00:09:46.619 16:25:23 -- common/autotest_common.sh@930 -- # kill -0 107365 00:09:46.619 16:25:23 -- common/autotest_common.sh@931 -- # uname 00:09:46.619 16:25:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:46.619 16:25:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107365 00:09:46.619 16:25:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:46.619 16:25:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:46.619 killing process with pid 107365 00:09:46.619 16:25:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107365' 00:09:46.619 16:25:23 -- common/autotest_common.sh@945 -- # kill 107365 00:09:46.619 16:25:23 -- common/autotest_common.sh@950 -- # wait 107365 00:09:50.812 16:25:27 -- event/cpu_locks.sh@90 -- # killprocess 107400 00:09:50.812 16:25:27 -- common/autotest_common.sh@926 -- # '[' -z 107400 ']' 00:09:50.812 16:25:27 -- common/autotest_common.sh@930 -- # kill -0 107400 00:09:50.812 16:25:27 -- common/autotest_common.sh@931 -- # uname 00:09:50.812 16:25:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:50.812 16:25:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107400 00:09:50.812 16:25:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:50.812 killing process with pid 107400 00:09:50.812 16:25:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:50.812 16:25:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107400' 00:09:50.812 16:25:27 -- common/autotest_common.sh@945 -- # kill 107400 00:09:50.812 16:25:27 -- common/autotest_common.sh@950 -- # wait 107400 00:09:52.714 00:09:52.714 real 0m10.227s 00:09:52.714 user 0m10.853s 00:09:52.714 sys 0m1.128s 00:09:52.714 16:25:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.714 ************************************ 00:09:52.714 END TEST non_locking_app_on_locked_coremask 00:09:52.714 ************************************ 00:09:52.714 16:25:29 -- common/autotest_common.sh@10 -- # set +x 00:09:52.714 16:25:29 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:52.714 16:25:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:52.714 16:25:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:52.714 16:25:29 -- common/autotest_common.sh@10 -- # set +x 00:09:52.714 ************************************ 00:09:52.714 START TEST locking_app_on_unlocked_coremask 00:09:52.714 ************************************ 00:09:52.714 16:25:29 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:09:52.714 
16:25:29 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=107557 00:09:52.714 16:25:29 -- event/cpu_locks.sh@99 -- # waitforlisten 107557 /var/tmp/spdk.sock 00:09:52.714 16:25:29 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:52.714 16:25:29 -- common/autotest_common.sh@819 -- # '[' -z 107557 ']' 00:09:52.714 16:25:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.714 16:25:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:52.714 16:25:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.714 16:25:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:52.714 16:25:29 -- common/autotest_common.sh@10 -- # set +x 00:09:52.714 [2024-07-11 16:25:29.347239] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:09:52.714 [2024-07-11 16:25:29.347435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107557 ] 00:09:52.714 [2024-07-11 16:25:29.513342] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:52.714 [2024-07-11 16:25:29.513425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.973 [2024-07-11 16:25:29.682083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:52.973 [2024-07-11 16:25:29.682347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.348 16:25:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:54.348 16:25:30 -- common/autotest_common.sh@852 -- # return 0 00:09:54.348 16:25:30 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=107587 00:09:54.348 16:25:30 -- event/cpu_locks.sh@103 -- # waitforlisten 107587 /var/tmp/spdk2.sock 00:09:54.348 16:25:30 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:54.348 16:25:30 -- common/autotest_common.sh@819 -- # '[' -z 107587 ']' 00:09:54.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:54.348 16:25:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:54.348 16:25:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:54.348 16:25:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:54.348 16:25:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:54.348 16:25:30 -- common/autotest_common.sh@10 -- # set +x 00:09:54.348 [2024-07-11 16:25:30.983856] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
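This test flips the previous arrangement: the first target opts out of core locking, so the second, lock-taking instance is the one that claims core 0. Sketch, same paths and flags as in the log:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &   # holds the core-0 lock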
00:09:54.348 [2024-07-11 16:25:30.984066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107587 ] 00:09:54.606 [2024-07-11 16:25:31.167675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.864 [2024-07-11 16:25:31.651704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:54.864 [2024-07-11 16:25:31.651948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.840 16:25:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:56.840 16:25:33 -- common/autotest_common.sh@852 -- # return 0 00:09:56.840 16:25:33 -- event/cpu_locks.sh@105 -- # locks_exist 107587 00:09:56.840 16:25:33 -- event/cpu_locks.sh@22 -- # lslocks -p 107587 00:09:56.840 16:25:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:56.840 16:25:33 -- event/cpu_locks.sh@107 -- # killprocess 107557 00:09:56.840 16:25:33 -- common/autotest_common.sh@926 -- # '[' -z 107557 ']' 00:09:56.841 16:25:33 -- common/autotest_common.sh@930 -- # kill -0 107557 00:09:56.841 16:25:33 -- common/autotest_common.sh@931 -- # uname 00:09:56.841 16:25:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:56.841 16:25:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107557 00:09:56.841 16:25:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:56.841 killing process with pid 107557 00:09:56.841 16:25:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:56.841 16:25:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107557' 00:09:56.841 16:25:33 -- common/autotest_common.sh@945 -- # kill 107557 00:09:56.841 16:25:33 -- common/autotest_common.sh@950 -- # wait 107557 00:10:01.029 16:25:37 -- event/cpu_locks.sh@108 -- # killprocess 107587 00:10:01.029 16:25:37 -- common/autotest_common.sh@926 -- # '[' -z 107587 ']' 00:10:01.029 16:25:37 -- common/autotest_common.sh@930 -- # kill -0 107587 00:10:01.029 16:25:37 -- common/autotest_common.sh@931 -- # uname 00:10:01.029 16:25:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:01.029 16:25:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107587 00:10:01.029 16:25:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:01.029 16:25:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:01.029 killing process with pid 107587 00:10:01.029 16:25:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107587' 00:10:01.029 16:25:37 -- common/autotest_common.sh@945 -- # kill 107587 00:10:01.029 16:25:37 -- common/autotest_common.sh@950 -- # wait 107587 00:10:02.932 00:10:02.932 real 0m10.276s 00:10:02.932 user 0m10.838s 00:10:02.932 sys 0m1.276s 00:10:02.932 16:25:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.932 ************************************ 00:10:02.932 END TEST locking_app_on_unlocked_coremask 00:10:02.932 ************************************ 00:10:02.932 16:25:39 -- common/autotest_common.sh@10 -- # set +x 00:10:02.932 16:25:39 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:02.932 16:25:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:02.932 16:25:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:02.932 16:25:39 -- 
common/autotest_common.sh@10 -- # set +x 00:10:02.932 ************************************ 00:10:02.932 START TEST locking_app_on_locked_coremask 00:10:02.932 ************************************ 00:10:02.932 16:25:39 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:10:02.932 16:25:39 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=107742 00:10:02.932 16:25:39 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:02.933 16:25:39 -- event/cpu_locks.sh@116 -- # waitforlisten 107742 /var/tmp/spdk.sock 00:10:02.933 16:25:39 -- common/autotest_common.sh@819 -- # '[' -z 107742 ']' 00:10:02.933 16:25:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.933 16:25:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:02.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.933 16:25:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.933 16:25:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:02.933 16:25:39 -- common/autotest_common.sh@10 -- # set +x 00:10:02.933 [2024-07-11 16:25:39.670273] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:02.933 [2024-07-11 16:25:39.670453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107742 ] 00:10:03.192 [2024-07-11 16:25:39.822636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.192 [2024-07-11 16:25:39.993673] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:03.192 [2024-07-11 16:25:39.993913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.568 16:25:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:04.568 16:25:41 -- common/autotest_common.sh@852 -- # return 0 00:10:04.568 16:25:41 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=107777 00:10:04.568 16:25:41 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 107777 /var/tmp/spdk2.sock 00:10:04.568 16:25:41 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:04.568 16:25:41 -- common/autotest_common.sh@640 -- # local es=0 00:10:04.568 16:25:41 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107777 /var/tmp/spdk2.sock 00:10:04.568 16:25:41 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:04.568 16:25:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:04.568 16:25:41 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:04.568 16:25:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:04.568 16:25:41 -- common/autotest_common.sh@643 -- # waitforlisten 107777 /var/tmp/spdk2.sock 00:10:04.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
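The launch just issued is expected to fail: core 0 is already claimed and this second instance does not pass --disable-cpumask-locks. The NOT wrapper inverts the exit status so the expected failure counts as a pass. In spirit (simplified; the real helper also validates its argument and tracks exit codes through the es= bookkeeping visible below):

    NOT() { ! "$@"; }                              # succeed only if the command fails
    NOT waitforlisten 107777 /var/tmp/spdk2.sock   # passes: startup on a claimed core must fail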
00:10:04.568 16:25:41 -- common/autotest_common.sh@819 -- # '[' -z 107777 ']' 00:10:04.568 16:25:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:04.568 16:25:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:04.568 16:25:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:04.568 16:25:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:04.568 16:25:41 -- common/autotest_common.sh@10 -- # set +x 00:10:04.827 [2024-07-11 16:25:41.387361] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:04.827 [2024-07-11 16:25:41.387559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107777 ] 00:10:04.827 [2024-07-11 16:25:41.552807] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 107742 has claimed it. 00:10:04.827 [2024-07-11 16:25:41.552909] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:05.395 ERROR: process (pid: 107777) is no longer running 00:10:05.395 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107777) - No such process 00:10:05.395 16:25:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:05.395 16:25:42 -- common/autotest_common.sh@852 -- # return 1 00:10:05.395 16:25:42 -- common/autotest_common.sh@643 -- # es=1 00:10:05.395 16:25:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:05.395 16:25:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:05.395 16:25:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:05.395 16:25:42 -- event/cpu_locks.sh@122 -- # locks_exist 107742 00:10:05.395 16:25:42 -- event/cpu_locks.sh@22 -- # lslocks -p 107742 00:10:05.395 16:25:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:05.654 16:25:42 -- event/cpu_locks.sh@124 -- # killprocess 107742 00:10:05.654 16:25:42 -- common/autotest_common.sh@926 -- # '[' -z 107742 ']' 00:10:05.654 16:25:42 -- common/autotest_common.sh@930 -- # kill -0 107742 00:10:05.654 16:25:42 -- common/autotest_common.sh@931 -- # uname 00:10:05.654 16:25:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:05.654 16:25:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107742 00:10:05.654 16:25:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:05.654 16:25:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:05.654 killing process with pid 107742 00:10:05.654 16:25:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107742' 00:10:05.654 16:25:42 -- common/autotest_common.sh@945 -- # kill 107742 00:10:05.654 16:25:42 -- common/autotest_common.sh@950 -- # wait 107742 00:10:07.557 00:10:07.557 real 0m4.469s 00:10:07.557 user 0m4.912s 00:10:07.557 sys 0m0.704s 00:10:07.557 16:25:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.557 ************************************ 00:10:07.557 END TEST locking_app_on_locked_coremask 00:10:07.557 ************************************ 00:10:07.557 16:25:44 -- common/autotest_common.sh@10 -- # set +x 00:10:07.557 16:25:44 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:07.557 16:25:44 -- common/autotest_common.sh@1077 -- 
# '[' 2 -le 1 ']' 00:10:07.557 16:25:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:07.557 16:25:44 -- common/autotest_common.sh@10 -- # set +x 00:10:07.557 ************************************ 00:10:07.557 START TEST locking_overlapped_coremask 00:10:07.557 ************************************ 00:10:07.557 16:25:44 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:10:07.557 16:25:44 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=107847 00:10:07.557 16:25:44 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:07.557 16:25:44 -- event/cpu_locks.sh@133 -- # waitforlisten 107847 /var/tmp/spdk.sock 00:10:07.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.557 16:25:44 -- common/autotest_common.sh@819 -- # '[' -z 107847 ']' 00:10:07.557 16:25:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.557 16:25:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:07.557 16:25:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.557 16:25:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:07.557 16:25:44 -- common/autotest_common.sh@10 -- # set +x 00:10:07.557 [2024-07-11 16:25:44.204599] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:07.557 [2024-07-11 16:25:44.204799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107847 ] 00:10:07.816 [2024-07-11 16:25:44.379296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.816 [2024-07-11 16:25:44.553506] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:07.816 [2024-07-11 16:25:44.553919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.816 [2024-07-11 16:25:44.554136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.816 [2024-07-11 16:25:44.554150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.194 16:25:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:09.194 16:25:45 -- common/autotest_common.sh@852 -- # return 0 00:10:09.194 16:25:45 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=107892 00:10:09.194 16:25:45 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 107892 /var/tmp/spdk2.sock 00:10:09.194 16:25:45 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:09.194 16:25:45 -- common/autotest_common.sh@640 -- # local es=0 00:10:09.194 16:25:45 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 107892 /var/tmp/spdk2.sock 00:10:09.194 16:25:45 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:10:09.194 16:25:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:09.194 16:25:45 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:10:09.194 16:25:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:09.194 16:25:45 -- common/autotest_common.sh@643 -- # waitforlisten 107892 /var/tmp/spdk2.sock 00:10:09.194 16:25:45 -- common/autotest_common.sh@819 -- # '[' -z 107892 ']' 00:10:09.194 16:25:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 
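The two masks make the outcome predictable: -m 0x7 covers cores 0 through 2, -m 0x1c covers cores 2 through 4, and the only shared bit is core 2, which is exactly where the claim fails below. Quick check:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints overlap: 0x4, i.e. core 2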
00:10:09.194 16:25:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:09.194 16:25:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:09.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:09.194 16:25:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:09.194 16:25:45 -- common/autotest_common.sh@10 -- # set +x 00:10:09.194 [2024-07-11 16:25:45.894851] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:09.194 [2024-07-11 16:25:45.895061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107892 ] 00:10:09.453 [2024-07-11 16:25:46.077894] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 107847 has claimed it. 00:10:09.453 [2024-07-11 16:25:46.077985] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:10.021 ERROR: process (pid: 107892) is no longer running 00:10:10.021 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (107892) - No such process 00:10:10.021 16:25:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:10.021 16:25:46 -- common/autotest_common.sh@852 -- # return 1 00:10:10.021 16:25:46 -- common/autotest_common.sh@643 -- # es=1 00:10:10.021 16:25:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:10.021 16:25:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:10.021 16:25:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:10.021 16:25:46 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:10.021 16:25:46 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:10.021 16:25:46 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:10.021 16:25:46 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:10.021 16:25:46 -- event/cpu_locks.sh@141 -- # killprocess 107847 00:10:10.021 16:25:46 -- common/autotest_common.sh@926 -- # '[' -z 107847 ']' 00:10:10.021 16:25:46 -- common/autotest_common.sh@930 -- # kill -0 107847 00:10:10.021 16:25:46 -- common/autotest_common.sh@931 -- # uname 00:10:10.021 16:25:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:10.021 16:25:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107847 00:10:10.021 16:25:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:10.021 16:25:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:10.021 killing process with pid 107847 00:10:10.021 16:25:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107847' 00:10:10.021 16:25:46 -- common/autotest_common.sh@945 -- # kill 107847 00:10:10.021 16:25:46 -- common/autotest_common.sh@950 -- # wait 107847 00:10:11.923 00:10:11.923 real 0m4.359s 00:10:11.923 user 0m11.876s 00:10:11.923 sys 0m0.636s 00:10:11.923 16:25:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.923 ************************************ 00:10:11.923 END TEST locking_overlapped_coremask 00:10:11.923 
************************************ 00:10:11.923 16:25:48 -- common/autotest_common.sh@10 -- # set +x 00:10:11.923 16:25:48 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:11.923 16:25:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:11.923 16:25:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:11.923 16:25:48 -- common/autotest_common.sh@10 -- # set +x 00:10:11.923 ************************************ 00:10:11.923 START TEST locking_overlapped_coremask_via_rpc 00:10:11.923 ************************************ 00:10:11.923 16:25:48 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:10:11.923 16:25:48 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=107960 00:10:11.923 16:25:48 -- event/cpu_locks.sh@149 -- # waitforlisten 107960 /var/tmp/spdk.sock 00:10:11.923 16:25:48 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:11.924 16:25:48 -- common/autotest_common.sh@819 -- # '[' -z 107960 ']' 00:10:11.924 16:25:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.924 16:25:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:11.924 16:25:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.924 16:25:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:11.924 16:25:48 -- common/autotest_common.sh@10 -- # set +x 00:10:11.924 [2024-07-11 16:25:48.599105] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:11.924 [2024-07-11 16:25:48.599277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107960 ] 00:10:12.182 [2024-07-11 16:25:48.760428] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:12.182 [2024-07-11 16:25:48.760500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.182 [2024-07-11 16:25:48.941035] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:12.182 [2024-07-11 16:25:48.941441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.182 [2024-07-11 16:25:48.941546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.182 [2024-07-11 16:25:48.941547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.556 16:25:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:13.556 16:25:50 -- common/autotest_common.sh@852 -- # return 0 00:10:13.556 16:25:50 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:13.556 16:25:50 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=107991 00:10:13.556 16:25:50 -- event/cpu_locks.sh@153 -- # waitforlisten 107991 /var/tmp/spdk2.sock 00:10:13.556 16:25:50 -- common/autotest_common.sh@819 -- # '[' -z 107991 ']' 00:10:13.556 16:25:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:13.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
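Here the lock claim is deferred to runtime: both targets boot with --disable-cpumask-locks and locking is then switched on over JSON-RPC. rpc_cmd is the harness wrapper around the RPC client; against a plain checkout the equivalent calls would be roughly the following (the scripts/rpc.py path is assumed, the method name and -s socket flag are as in the log):

    scripts/rpc.py framework_enable_cpumask_locks                         # first target claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: core 2 already claimed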
00:10:13.556 16:25:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:13.556 16:25:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:13.556 16:25:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:13.556 16:25:50 -- common/autotest_common.sh@10 -- # set +x 00:10:13.556 [2024-07-11 16:25:50.277438] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:13.556 [2024-07-11 16:25:50.277615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107991 ] 00:10:13.814 [2024-07-11 16:25:50.452014] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:13.814 [2024-07-11 16:25:50.452096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:14.071 [2024-07-11 16:25:50.867585] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:14.071 [2024-07-11 16:25:50.868267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.328 [2024-07-11 16:25:50.881173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:14.328 [2024-07-11 16:25:50.881175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.274 16:25:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:16.274 16:25:52 -- common/autotest_common.sh@852 -- # return 0 00:10:16.274 16:25:52 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:16.274 16:25:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.274 16:25:52 -- common/autotest_common.sh@10 -- # set +x 00:10:16.274 16:25:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:16.274 16:25:52 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:16.274 16:25:52 -- common/autotest_common.sh@640 -- # local es=0 00:10:16.274 16:25:52 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:16.274 16:25:52 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:10:16.274 16:25:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:16.274 16:25:52 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:10:16.274 16:25:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:16.274 16:25:52 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:16.274 16:25:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:16.274 16:25:52 -- common/autotest_common.sh@10 -- # set +x 00:10:16.274 [2024-07-11 16:25:52.629240] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 107960 has claimed it. 
00:10:16.274 request: 00:10:16.274 { 00:10:16.274 "method": "framework_enable_cpumask_locks", 00:10:16.274 "req_id": 1 00:10:16.274 } 00:10:16.274 Got JSON-RPC error response 00:10:16.274 response: 00:10:16.274 { 00:10:16.274 "code": -32603, 00:10:16.274 "message": "Failed to claim CPU core: 2" 00:10:16.274 } 00:10:16.274 16:25:52 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:10:16.274 16:25:52 -- common/autotest_common.sh@643 -- # es=1 00:10:16.274 16:25:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:16.274 16:25:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:16.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.274 16:25:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:16.274 16:25:52 -- event/cpu_locks.sh@158 -- # waitforlisten 107960 /var/tmp/spdk.sock 00:10:16.274 16:25:52 -- common/autotest_common.sh@819 -- # '[' -z 107960 ']' 00:10:16.274 16:25:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.274 16:25:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:16.274 16:25:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.274 16:25:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:16.274 16:25:52 -- common/autotest_common.sh@10 -- # set +x 00:10:16.274 16:25:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:16.274 16:25:52 -- common/autotest_common.sh@852 -- # return 0 00:10:16.274 16:25:52 -- event/cpu_locks.sh@159 -- # waitforlisten 107991 /var/tmp/spdk2.sock 00:10:16.274 16:25:52 -- common/autotest_common.sh@819 -- # '[' -z 107991 ']' 00:10:16.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:16.274 16:25:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:16.274 16:25:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:16.274 16:25:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
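The request/response pair above is plain JSON-RPC over the target's Unix socket; issued raw it would look roughly like this (a sketch: SPDK's RPC server speaks JSON-RPC 2.0, and the on-wire envelope carries jsonrpc/id fields that the harness dump abbreviates as req_id):

    printf '{"jsonrpc":"2.0","method":"framework_enable_cpumask_locks","id":1}\n' \
        | nc -U /var/tmp/spdk2.sock   # -> error -32603, "Failed to claim CPU core: 2"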
00:10:16.274 16:25:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:16.274 16:25:52 -- common/autotest_common.sh@10 -- # set +x 00:10:16.532 16:25:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:16.532 16:25:53 -- common/autotest_common.sh@852 -- # return 0 00:10:16.532 16:25:53 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:16.532 16:25:53 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:16.532 ************************************ 00:10:16.532 END TEST locking_overlapped_coremask_via_rpc 00:10:16.532 ************************************ 00:10:16.532 16:25:53 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:16.532 16:25:53 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:16.532 00:10:16.532 real 0m4.551s 00:10:16.532 user 0m1.673s 00:10:16.532 sys 0m0.306s 00:10:16.532 16:25:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:16.532 16:25:53 -- common/autotest_common.sh@10 -- # set +x 00:10:16.532 16:25:53 -- event/cpu_locks.sh@174 -- # cleanup 00:10:16.532 16:25:53 -- event/cpu_locks.sh@15 -- # [[ -z 107960 ]] 00:10:16.532 16:25:53 -- event/cpu_locks.sh@15 -- # killprocess 107960 00:10:16.532 16:25:53 -- common/autotest_common.sh@926 -- # '[' -z 107960 ']' 00:10:16.532 16:25:53 -- common/autotest_common.sh@930 -- # kill -0 107960 00:10:16.532 16:25:53 -- common/autotest_common.sh@931 -- # uname 00:10:16.532 16:25:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:16.532 16:25:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107960 00:10:16.532 16:25:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:16.532 killing process with pid 107960 00:10:16.532 16:25:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:16.532 16:25:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107960' 00:10:16.532 16:25:53 -- common/autotest_common.sh@945 -- # kill 107960 00:10:16.532 16:25:53 -- common/autotest_common.sh@950 -- # wait 107960 00:10:18.427 16:25:55 -- event/cpu_locks.sh@16 -- # [[ -z 107991 ]] 00:10:18.427 16:25:55 -- event/cpu_locks.sh@16 -- # killprocess 107991 00:10:18.427 16:25:55 -- common/autotest_common.sh@926 -- # '[' -z 107991 ']' 00:10:18.427 16:25:55 -- common/autotest_common.sh@930 -- # kill -0 107991 00:10:18.427 16:25:55 -- common/autotest_common.sh@931 -- # uname 00:10:18.427 16:25:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:18.427 16:25:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 107991 00:10:18.427 16:25:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:10:18.427 killing process with pid 107991 00:10:18.427 16:25:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:10:18.427 16:25:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 107991' 00:10:18.427 16:25:55 -- common/autotest_common.sh@945 -- # kill 107991 00:10:18.427 16:25:55 -- common/autotest_common.sh@950 -- # wait 107991 00:10:20.961 16:25:57 -- event/cpu_locks.sh@18 -- # rm -f 00:10:20.961 Process with pid 107960 is not found 00:10:20.961 Process with pid 107991 is not found 00:10:20.961 16:25:57 -- event/cpu_locks.sh@1 -- # cleanup 00:10:20.961 16:25:57 -- event/cpu_locks.sh@15 -- # [[ -z 
107960 ]] 00:10:20.961 16:25:57 -- event/cpu_locks.sh@15 -- # killprocess 107960 00:10:20.961 16:25:57 -- common/autotest_common.sh@926 -- # '[' -z 107960 ']' 00:10:20.961 16:25:57 -- common/autotest_common.sh@930 -- # kill -0 107960 00:10:20.961 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (107960) - No such process 00:10:20.961 16:25:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 107960 is not found' 00:10:20.961 16:25:57 -- event/cpu_locks.sh@16 -- # [[ -z 107991 ]] 00:10:20.961 16:25:57 -- event/cpu_locks.sh@16 -- # killprocess 107991 00:10:20.961 16:25:57 -- common/autotest_common.sh@926 -- # '[' -z 107991 ']' 00:10:20.961 16:25:57 -- common/autotest_common.sh@930 -- # kill -0 107991 00:10:20.961 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (107991) - No such process 00:10:20.961 16:25:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 107991 is not found' 00:10:20.961 16:25:57 -- event/cpu_locks.sh@18 -- # rm -f 00:10:20.961 ************************************ 00:10:20.961 END TEST cpu_locks 00:10:20.961 ************************************ 00:10:20.961 00:10:20.961 real 0m45.738s 00:10:20.961 user 1m19.561s 00:10:20.961 sys 0m6.353s 00:10:20.961 16:25:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.961 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:10:20.961 ************************************ 00:10:20.961 END TEST event 00:10:20.961 ************************************ 00:10:20.961 00:10:20.961 real 1m17.750s 00:10:20.961 user 2m20.984s 00:10:20.961 sys 0m10.342s 00:10:20.961 16:25:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.961 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:10:20.961 16:25:57 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:20.961 16:25:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:20.961 16:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:20.961 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:10:20.961 ************************************ 00:10:20.961 START TEST thread 00:10:20.961 ************************************ 00:10:20.961 16:25:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:20.961 * Looking for test storage... 00:10:20.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:20.961 16:25:57 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:20.961 16:25:57 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:20.961 16:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:20.961 16:25:57 -- common/autotest_common.sh@10 -- # set +x 00:10:20.961 ************************************ 00:10:20.961 START TEST thread_poller_perf 00:10:20.961 ************************************ 00:10:20.961 16:25:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:20.961 [2024-07-11 16:25:57.415746] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:10:20.961 [2024-07-11 16:25:57.416147] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108200 ] 00:10:20.961 [2024-07-11 16:25:57.589023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.220 [2024-07-11 16:25:57.826221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.220 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:22.597 ====================================== 00:10:22.597 busy:2212668588 (cyc) 00:10:22.597 total_run_count: 362000 00:10:22.597 tsc_hz: 2200000000 (cyc) 00:10:22.597 ====================================== 00:10:22.597 poller_cost: 6112 (cyc), 2778 (nsec) 00:10:22.597 ************************************ 00:10:22.597 END TEST thread_poller_perf 00:10:22.597 ************************************ 00:10:22.597 00:10:22.597 real 0m1.780s 00:10:22.597 user 0m1.538s 00:10:22.597 sys 0m0.140s 00:10:22.597 16:25:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.597 16:25:59 -- common/autotest_common.sh@10 -- # set +x 00:10:22.597 16:25:59 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:22.597 16:25:59 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:10:22.597 16:25:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:22.597 16:25:59 -- common/autotest_common.sh@10 -- # set +x 00:10:22.597 ************************************ 00:10:22.597 START TEST thread_poller_perf 00:10:22.597 ************************************ 00:10:22.597 16:25:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:22.597 [2024-07-11 16:25:59.235034] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:22.597 [2024-07-11 16:25:59.235853] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108245 ] 00:10:22.597 [2024-07-11 16:25:59.402322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.856 [2024-07-11 16:25:59.568235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.856 Running 1000 pollers for 1 seconds with 0 microseconds period. 
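The ==== summaries in this section reduce to simple division; taking the first run above (1 microsecond poller period) as an arithmetic check:

    echo $(( 2212668588 / 362000 ))             # 6112 cycles per poller invocation
    echo $(( 6112 * 1000000000 / 2200000000 ))  # 2778 ns at the reported 2.2 GHz TSC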
00:10:24.233 ====================================== 00:10:24.233 busy:2204662916 (cyc) 00:10:24.233 total_run_count: 4685000 00:10:24.233 tsc_hz: 2200000000 (cyc) 00:10:24.233 ====================================== 00:10:24.233 poller_cost: 470 (cyc), 213 (nsec) 00:10:24.233 ************************************ 00:10:24.233 END TEST thread_poller_perf 00:10:24.233 ************************************ 00:10:24.233 00:10:24.233 real 0m1.686s 00:10:24.233 user 0m1.476s 00:10:24.233 sys 0m0.108s 00:10:24.233 16:26:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.233 16:26:00 -- common/autotest_common.sh@10 -- # set +x 00:10:24.233 16:26:00 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:10:24.233 16:26:00 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:24.233 16:26:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:24.233 16:26:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:24.233 16:26:00 -- common/autotest_common.sh@10 -- # set +x 00:10:24.233 ************************************ 00:10:24.233 START TEST thread_spdk_lock 00:10:24.233 ************************************ 00:10:24.233 16:26:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:10:24.233 [2024-07-11 16:26:00.974482] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:24.233 [2024-07-11 16:26:00.975033] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108286 ] 00:10:24.492 [2024-07-11 16:26:01.150562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:24.751 [2024-07-11 16:26:01.308609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.751 [2024-07-11 16:26:01.308604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.318 [2024-07-11 16:26:01.830217] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:25.318 [2024-07-11 16:26:01.830475] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:10:25.318 [2024-07-11 16:26:01.830641] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x555e37bb9840 00:10:25.318 [2024-07-11 16:26:01.837811] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:25.318 [2024-07-11 16:26:01.838049] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:25.318 [2024-07-11 16:26:01.838185] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:10:25.577 Starting test contend 00:10:25.577 Worker Delay Wait us Hold us Total us 00:10:25.577 0 3 124864 195587 320452 00:10:25.577 1 5 50698 297699 348397 00:10:25.577 PASS test contend 00:10:25.577 Starting test hold_by_poller 
00:10:25.577 PASS test hold_by_poller 00:10:25.577 Starting test hold_by_message 00:10:25.577 PASS test hold_by_message 00:10:25.577 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:10:25.577 100014 assertions passed 00:10:25.577 0 assertions failed 00:10:25.577 ************************************ 00:10:25.577 END TEST thread_spdk_lock 00:10:25.577 ************************************ 00:10:25.577 00:10:25.577 real 0m1.230s 00:10:25.577 user 0m1.525s 00:10:25.577 sys 0m0.133s 00:10:25.577 16:26:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.577 16:26:02 -- common/autotest_common.sh@10 -- # set +x 00:10:25.577 ************************************ 00:10:25.577 END TEST thread 00:10:25.577 ************************************ 00:10:25.577 00:10:25.577 real 0m4.910s 00:10:25.577 user 0m4.661s 00:10:25.577 sys 0m0.459s 00:10:25.577 16:26:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.577 16:26:02 -- common/autotest_common.sh@10 -- # set +x 00:10:25.577 16:26:02 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:25.577 16:26:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:25.577 16:26:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:25.577 16:26:02 -- common/autotest_common.sh@10 -- # set +x 00:10:25.577 ************************************ 00:10:25.577 START TEST accel 00:10:25.577 ************************************ 00:10:25.577 16:26:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:10:25.577 * Looking for test storage... 00:10:25.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:25.577 16:26:02 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:10:25.577 16:26:02 -- accel/accel.sh@74 -- # get_expected_opcs 00:10:25.577 16:26:02 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:25.577 16:26:02 -- accel/accel.sh@59 -- # spdk_tgt_pid=108371 00:10:25.577 16:26:02 -- accel/accel.sh@60 -- # waitforlisten 108371 00:10:25.577 16:26:02 -- common/autotest_common.sh@819 -- # '[' -z 108371 ']' 00:10:25.577 16:26:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.577 16:26:02 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:10:25.577 16:26:02 -- accel/accel.sh@58 -- # build_accel_config 00:10:25.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.577 16:26:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:25.577 16:26:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.577 16:26:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.577 16:26:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:25.577 16:26:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.577 16:26:02 -- common/autotest_common.sh@10 -- # set +x 00:10:25.577 16:26:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.577 16:26:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.577 16:26:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:25.577 16:26:02 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.577 16:26:02 -- accel/accel.sh@42 -- # jq -r . 00:10:25.836 [2024-07-11 16:26:02.387610] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
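A note on reading the spdk_lock output above: the *ERROR* lines are intentionally triggered spinlock error paths (exercising them is the point of the test), and the verdict is the PASS lines plus the 100014-assertions summary. One illustrative way to watch only the verdicts:

    /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock | grep -E 'test |assertions'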
00:10:25.836 [2024-07-11 16:26:02.388263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108371 ] 00:10:25.836 [2024-07-11 16:26:02.552128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.105 [2024-07-11 16:26:02.730724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:26.105 [2024-07-11 16:26:02.731125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.491 16:26:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:27.492 16:26:04 -- common/autotest_common.sh@852 -- # return 0 00:10:27.492 16:26:04 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:10:27.492 16:26:04 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:10:27.492 16:26:04 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:10:27.492 16:26:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:27.492 16:26:04 -- common/autotest_common.sh@10 -- # set +x 00:10:27.492 16:26:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # 
expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # IFS== 00:10:27.492 16:26:04 -- accel/accel.sh@64 -- # read -r opc module 00:10:27.492 16:26:04 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:10:27.492 16:26:04 -- accel/accel.sh@67 -- # killprocess 108371 00:10:27.492 16:26:04 -- common/autotest_common.sh@926 -- # '[' -z 108371 ']' 00:10:27.492 16:26:04 -- common/autotest_common.sh@930 -- # kill -0 108371 00:10:27.492 16:26:04 -- common/autotest_common.sh@931 -- # uname 00:10:27.492 16:26:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:27.492 16:26:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 108371 00:10:27.492 16:26:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:27.492 16:26:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:27.492 16:26:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 108371' 00:10:27.492 killing process with pid 108371 00:10:27.492 16:26:04 -- common/autotest_common.sh@945 -- # kill 108371 00:10:27.492 16:26:04 -- common/autotest_common.sh@950 -- # wait 108371 00:10:29.397 16:26:05 -- accel/accel.sh@68 -- # trap - ERR 00:10:29.397 16:26:05 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:10:29.397 16:26:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:29.397 16:26:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:29.397 16:26:05 -- common/autotest_common.sh@10 -- # set +x 00:10:29.397 16:26:05 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:10:29.397 16:26:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:10:29.397 16:26:06 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.397 16:26:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:29.397 16:26:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.397 16:26:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.397 16:26:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:29.397 16:26:06 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:10:29.397 16:26:06 -- accel/accel.sh@41 -- # local IFS=, 00:10:29.397 16:26:06 -- accel/accel.sh@42 -- # jq -r . 00:10:29.397 16:26:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.397 16:26:06 -- common/autotest_common.sh@10 -- # set +x 00:10:29.397 16:26:06 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:10:29.397 16:26:06 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:29.397 16:26:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:29.397 16:26:06 -- common/autotest_common.sh@10 -- # set +x 00:10:29.397 ************************************ 00:10:29.397 START TEST accel_missing_filename 00:10:29.397 ************************************ 00:10:29.397 16:26:06 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:10:29.397 16:26:06 -- common/autotest_common.sh@640 -- # local es=0 00:10:29.397 16:26:06 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:10:29.397 16:26:06 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:29.397 16:26:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:29.397 16:26:06 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:29.397 16:26:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:29.397 16:26:06 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:10:29.397 16:26:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:10:29.397 16:26:06 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.397 16:26:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:29.397 16:26:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.397 16:26:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.397 16:26:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:29.397 16:26:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:29.397 16:26:06 -- accel/accel.sh@41 -- # local IFS=, 00:10:29.397 16:26:06 -- accel/accel.sh@42 -- # jq -r . 00:10:29.397 [2024-07-11 16:26:06.165033] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:29.397 [2024-07-11 16:26:06.165953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108483 ] 00:10:29.656 [2024-07-11 16:26:06.333872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.915 [2024-07-11 16:26:06.509877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.915 [2024-07-11 16:26:06.682797] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:30.482 [2024-07-11 16:26:07.088752] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:30.741 A filename is required. 
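The failure above is the point of the test: the compress workload has no default input, so -l is mandatory. Side by side, with the binary path as in the log and the bib input file that the next test uses:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
    #   -> A filename is required.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib   # valid: input supplied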
00:10:30.741 ************************************ 00:10:30.741 END TEST accel_missing_filename 00:10:30.741 ************************************ 00:10:30.741 16:26:07 -- common/autotest_common.sh@643 -- # es=234 00:10:30.741 16:26:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:30.741 16:26:07 -- common/autotest_common.sh@652 -- # es=106 00:10:30.741 16:26:07 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:30.741 16:26:07 -- common/autotest_common.sh@660 -- # es=1 00:10:30.741 16:26:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:30.741 00:10:30.741 real 0m1.290s 00:10:30.741 user 0m1.084s 00:10:30.741 sys 0m0.159s 00:10:30.741 16:26:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.741 16:26:07 -- common/autotest_common.sh@10 -- # set +x 00:10:30.741 16:26:07 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:30.741 16:26:07 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:30.741 16:26:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:30.741 16:26:07 -- common/autotest_common.sh@10 -- # set +x 00:10:30.741 ************************************ 00:10:30.741 START TEST accel_compress_verify 00:10:30.741 ************************************ 00:10:30.741 16:26:07 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:30.741 16:26:07 -- common/autotest_common.sh@640 -- # local es=0 00:10:30.741 16:26:07 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:30.741 16:26:07 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:30.741 16:26:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:30.741 16:26:07 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:30.741 16:26:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:30.741 16:26:07 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:30.741 16:26:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:30.741 16:26:07 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.741 16:26:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.741 16:26:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.741 16:26:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.741 16:26:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.741 16:26:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.741 16:26:07 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.741 16:26:07 -- accel/accel.sh@42 -- # jq -r . 00:10:30.741 [2024-07-11 16:26:07.506231] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:10:30.741 [2024-07-11 16:26:07.506564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108529 ] 00:10:31.000 [2024-07-11 16:26:07.673994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.259 [2024-07-11 16:26:07.832048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.259 [2024-07-11 16:26:08.030042] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:31.826 [2024-07-11 16:26:08.435815] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:10:32.084 00:10:32.085 Compression does not support the verify option, aborting. 00:10:32.085 ************************************ 00:10:32.085 END TEST accel_compress_verify 00:10:32.085 ************************************ 00:10:32.085 16:26:08 -- common/autotest_common.sh@643 -- # es=161 00:10:32.085 16:26:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:32.085 16:26:08 -- common/autotest_common.sh@652 -- # es=33 00:10:32.085 16:26:08 -- common/autotest_common.sh@653 -- # case "$es" in 00:10:32.085 16:26:08 -- common/autotest_common.sh@660 -- # es=1 00:10:32.085 16:26:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:32.085 00:10:32.085 real 0m1.311s 00:10:32.085 user 0m1.100s 00:10:32.085 sys 0m0.167s 00:10:32.085 16:26:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.085 16:26:08 -- common/autotest_common.sh@10 -- # set +x 00:10:32.085 16:26:08 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:10:32.085 16:26:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:32.085 16:26:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:32.085 16:26:08 -- common/autotest_common.sh@10 -- # set +x 00:10:32.085 ************************************ 00:10:32.085 START TEST accel_wrong_workload 00:10:32.085 ************************************ 00:10:32.085 16:26:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:10:32.085 16:26:08 -- common/autotest_common.sh@640 -- # local es=0 00:10:32.085 16:26:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:10:32.085 16:26:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:32.085 16:26:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:32.085 16:26:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:32.085 16:26:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:32.085 16:26:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:10:32.085 16:26:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:10:32.085 16:26:08 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.085 16:26:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.085 16:26:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.085 16:26:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.085 16:26:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.085 16:26:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.085 16:26:08 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.085 16:26:08 -- accel/accel.sh@42 -- # jq -r . 
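Each of these negative tests runs through the NOT wrapper from autotest_common.sh, whose exit-status bookkeeping (es=234, es=106, es=1) is visible in the traces above. A condensed sketch of the idea, not the full implementation — the real helper also validates its argument via valid_exec_arg and manages xtrace, as the trace shows:

  not() {
      local es=0
      "$@" || es=$?
      # statuses above 128 are signal deaths; strip the offset (234 -> 106)
      (( es > 128 )) && es=$(( es - 128 ))
      # collapse any remaining failure to 1
      (( es != 0 )) && es=1
      # succeed only if the wrapped command failed
      (( !es == 0 ))
  }

The run that follows feeds accel_perf the deliberately bogus workload type foobar and expects this wrapper to flip the resulting parse failure into a pass.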
00:10:32.085 Unsupported workload type: foobar 00:10:32.085 [2024-07-11 16:26:08.867059] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:10:32.085 accel_perf options: 00:10:32.085 [-h help message] 00:10:32.085 [-q queue depth per core] 00:10:32.085 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:32.085 [-T number of threads per core 00:10:32.085 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:32.085 [-t time in seconds] 00:10:32.085 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:32.085 [ dif_verify, , dif_generate, dif_generate_copy 00:10:32.085 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:32.085 [-l for compress/decompress workloads, name of uncompressed input file 00:10:32.085 [-S for crc32c workload, use this seed value (default 0) 00:10:32.085 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:32.085 [-f for fill workload, use this BYTE value (default 255) 00:10:32.085 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:32.085 [-y verify result if this switch is on] 00:10:32.085 [-a tasks to allocate per core (default: same value as -q)] 00:10:32.085 Can be used to spread operations across a wider range of memory. 00:10:32.344 16:26:08 -- common/autotest_common.sh@643 -- # es=1 00:10:32.344 16:26:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:32.344 16:26:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:32.344 16:26:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:32.344 00:10:32.344 real 0m0.069s 00:10:32.344 user 0m0.090s 00:10:32.344 sys 0m0.025s 00:10:32.344 16:26:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.344 16:26:08 -- common/autotest_common.sh@10 -- # set +x 00:10:32.344 ************************************ 00:10:32.344 END TEST accel_wrong_workload 00:10:32.344 ************************************ 00:10:32.344 16:26:08 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:10:32.344 16:26:08 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:10:32.344 16:26:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:32.344 16:26:08 -- common/autotest_common.sh@10 -- # set +x 00:10:32.344 ************************************ 00:10:32.344 START TEST accel_negative_buffers 00:10:32.344 ************************************ 00:10:32.344 16:26:08 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:10:32.344 16:26:08 -- common/autotest_common.sh@640 -- # local es=0 00:10:32.344 16:26:08 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:10:32.344 16:26:08 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:10:32.344 16:26:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:32.344 16:26:08 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:10:32.344 16:26:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:32.344 16:26:08 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:10:32.344 16:26:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:10:32.344 16:26:08 -- accel/accel.sh@12 -- # 
build_accel_config 00:10:32.344 16:26:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.344 16:26:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.344 16:26:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.344 16:26:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.344 16:26:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.344 16:26:08 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.344 16:26:08 -- accel/accel.sh@42 -- # jq -r . 00:10:32.344 -x option must be non-negative. 00:10:32.344 [2024-07-11 16:26:08.978344] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:10:32.344 accel_perf options: 00:10:32.344 [-h help message] 00:10:32.344 [-q queue depth per core] 00:10:32.344 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:10:32.344 [-T number of threads per core 00:10:32.344 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:10:32.344 [-t time in seconds] 00:10:32.344 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:10:32.344 [ dif_verify, , dif_generate, dif_generate_copy 00:10:32.344 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:10:32.344 [-l for compress/decompress workloads, name of uncompressed input file 00:10:32.344 [-S for crc32c workload, use this seed value (default 0) 00:10:32.344 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:10:32.344 [-f for fill workload, use this BYTE value (default 255) 00:10:32.345 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:10:32.345 [-y verify result if this switch is on] 00:10:32.345 [-a tasks to allocate per core (default: same value as -q)] 00:10:32.345 Can be used to spread operations across a wider range of memory. 
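The usage dump above is printed because -x -1 violates the documented minimum of two xor source buffers. For contrast, a sketch of an invocation the parser would accept (duration and buffer count chosen for illustration, per the option descriptions above):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2

With a non-negative -x the run proceeds, and -y asks accel_perf to verify each result, as in the passing tests that follow.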
00:10:32.345 16:26:08 -- common/autotest_common.sh@643 -- # es=1 00:10:32.345 16:26:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:32.345 16:26:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:32.345 16:26:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:32.345 00:10:32.345 real 0m0.064s 00:10:32.345 user 0m0.080s 00:10:32.345 sys 0m0.040s 00:10:32.345 16:26:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.345 16:26:09 -- common/autotest_common.sh@10 -- # set +x 00:10:32.345 ************************************ 00:10:32.345 END TEST accel_negative_buffers 00:10:32.345 ************************************ 00:10:32.345 16:26:09 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:10:32.345 16:26:09 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:32.345 16:26:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:32.345 16:26:09 -- common/autotest_common.sh@10 -- # set +x 00:10:32.345 ************************************ 00:10:32.345 START TEST accel_crc32c 00:10:32.345 ************************************ 00:10:32.345 16:26:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:10:32.345 16:26:09 -- accel/accel.sh@16 -- # local accel_opc 00:10:32.345 16:26:09 -- accel/accel.sh@17 -- # local accel_module 00:10:32.345 16:26:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:32.345 16:26:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:32.345 16:26:09 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.345 16:26:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.345 16:26:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.345 16:26:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.345 16:26:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.345 16:26:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.345 16:26:09 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.345 16:26:09 -- accel/accel.sh@42 -- # jq -r . 00:10:32.345 [2024-07-11 16:26:09.096077] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:32.345 [2024-07-11 16:26:09.096387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108612 ] 00:10:32.603 [2024-07-11 16:26:09.259830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.862 [2024-07-11 16:26:09.436683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.764 16:26:11 -- accel/accel.sh@18 -- # out=' 00:10:34.764 SPDK Configuration: 00:10:34.764 Core mask: 0x1 00:10:34.764 00:10:34.764 Accel Perf Configuration: 00:10:34.764 Workload Type: crc32c 00:10:34.764 CRC-32C seed: 32 00:10:34.764 Transfer size: 4096 bytes 00:10:34.764 Vector count 1 00:10:34.764 Module: software 00:10:34.764 Queue depth: 32 00:10:34.764 Allocate depth: 32 00:10:34.764 # threads/core: 1 00:10:34.764 Run time: 1 seconds 00:10:34.764 Verify: Yes 00:10:34.764 00:10:34.764 Running for 1 seconds... 
00:10:34.764 00:10:34.764 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:34.764 ------------------------------------------------------------------------------------ 00:10:34.764 0,0 502912/s 1964 MiB/s 0 0 00:10:34.764 ==================================================================================== 00:10:34.764 Total 502912/s 1964 MiB/s 0 0' 00:10:34.764 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:34.764 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:34.764 16:26:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:10:34.764 16:26:11 -- accel/accel.sh@12 -- # build_accel_config 00:10:34.764 16:26:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:10:34.764 16:26:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:34.764 16:26:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:34.764 16:26:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:34.764 16:26:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:34.764 16:26:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:34.764 16:26:11 -- accel/accel.sh@41 -- # local IFS=, 00:10:34.764 16:26:11 -- accel/accel.sh@42 -- # jq -r . 00:10:34.764 [2024-07-11 16:26:11.399408] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:34.764 [2024-07-11 16:26:11.399734] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108657 ] 00:10:34.764 [2024-07-11 16:26:11.565846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.023 [2024-07-11 16:26:11.753666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val= 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val= 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val=0x1 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val= 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val= 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val=crc32c 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val=32 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val= 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val=software 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@23 -- # accel_module=software 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val=32 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val=32 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val=1 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val=Yes 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val= 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:35.282 16:26:11 -- accel/accel.sh@21 -- # val= 00:10:35.282 16:26:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # IFS=: 00:10:35.282 16:26:11 -- accel/accel.sh@20 -- # read -r var val 00:10:37.183 16:26:13 -- accel/accel.sh@21 -- # val= 00:10:37.183 16:26:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.183 16:26:13 -- accel/accel.sh@20 -- # IFS=: 00:10:37.183 16:26:13 -- accel/accel.sh@20 -- # read -r var val 00:10:37.183 16:26:13 -- accel/accel.sh@21 -- # val= 00:10:37.183 16:26:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.183 16:26:13 -- accel/accel.sh@20 -- # IFS=: 00:10:37.183 16:26:13 -- accel/accel.sh@20 -- # read -r var val 00:10:37.183 16:26:13 -- accel/accel.sh@21 -- # val= 00:10:37.183 16:26:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.183 16:26:13 -- accel/accel.sh@20 -- # IFS=: 00:10:37.183 16:26:13 -- accel/accel.sh@20 -- # read -r var val 00:10:37.183 16:26:13 -- accel/accel.sh@21 -- # val= 00:10:37.183 16:26:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.183 16:26:13 -- accel/accel.sh@20 -- # IFS=: 00:10:37.183 16:26:13 -- accel/accel.sh@20 -- # read -r var val 00:10:37.183 16:26:13 -- accel/accel.sh@21 -- # val= 00:10:37.183 16:26:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.183 16:26:13 -- accel/accel.sh@20 -- # IFS=: 00:10:37.183 16:26:13 
-- accel/accel.sh@20 -- # read -r var val 00:10:37.183 16:26:13 -- accel/accel.sh@21 -- # val= 00:10:37.183 16:26:13 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.183 16:26:13 -- accel/accel.sh@20 -- # IFS=: 00:10:37.183 16:26:13 -- accel/accel.sh@20 -- # read -r var val 00:10:37.183 ************************************ 00:10:37.183 END TEST accel_crc32c 00:10:37.183 ************************************ 00:10:37.183 16:26:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:37.183 16:26:13 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:37.183 16:26:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:37.183 00:10:37.183 real 0m4.629s 00:10:37.183 user 0m4.127s 00:10:37.183 sys 0m0.360s 00:10:37.183 16:26:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:37.183 16:26:13 -- common/autotest_common.sh@10 -- # set +x 00:10:37.183 16:26:13 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:10:37.183 16:26:13 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:37.183 16:26:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:37.183 16:26:13 -- common/autotest_common.sh@10 -- # set +x 00:10:37.183 ************************************ 00:10:37.183 START TEST accel_crc32c_C2 00:10:37.183 ************************************ 00:10:37.183 16:26:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:10:37.183 16:26:13 -- accel/accel.sh@16 -- # local accel_opc 00:10:37.183 16:26:13 -- accel/accel.sh@17 -- # local accel_module 00:10:37.183 16:26:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:37.183 16:26:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:37.183 16:26:13 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.183 16:26:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.183 16:26:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.183 16:26:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.183 16:26:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.183 16:26:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.183 16:26:13 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.183 16:26:13 -- accel/accel.sh@42 -- # jq -r . 00:10:37.183 [2024-07-11 16:26:13.780034] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:37.183 [2024-07-11 16:26:13.780918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108705 ] 00:10:37.183 [2024-07-11 16:26:13.950724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.442 [2024-07-11 16:26:14.190878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.343 16:26:16 -- accel/accel.sh@18 -- # out=' 00:10:39.343 SPDK Configuration: 00:10:39.343 Core mask: 0x1 00:10:39.343 00:10:39.343 Accel Perf Configuration: 00:10:39.343 Workload Type: crc32c 00:10:39.343 CRC-32C seed: 0 00:10:39.343 Transfer size: 4096 bytes 00:10:39.343 Vector count 2 00:10:39.343 Module: software 00:10:39.343 Queue depth: 32 00:10:39.343 Allocate depth: 32 00:10:39.343 # threads/core: 1 00:10:39.343 Run time: 1 seconds 00:10:39.343 Verify: Yes 00:10:39.343 00:10:39.343 Running for 1 seconds... 
00:10:39.343 00:10:39.343 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:39.343 ------------------------------------------------------------------------------------ 00:10:39.343 0,0 389856/s 3045 MiB/s 0 0 00:10:39.343 ==================================================================================== 00:10:39.343 Total 389856/s 1522 MiB/s 0 0' 00:10:39.343 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:39.343 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:39.343 16:26:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:10:39.343 16:26:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:39.343 16:26:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:10:39.343 16:26:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:39.343 16:26:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.343 16:26:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.344 16:26:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:39.344 16:26:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:39.344 16:26:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:39.344 16:26:16 -- accel/accel.sh@42 -- # jq -r . 00:10:39.602 [2024-07-11 16:26:16.186446] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:39.602 [2024-07-11 16:26:16.186805] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108755 ] 00:10:39.602 [2024-07-11 16:26:16.354565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.860 [2024-07-11 16:26:16.537645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val= 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val= 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val=0x1 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val= 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val= 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val=crc32c 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val=0 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val= 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val=software 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@23 -- # accel_module=software 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val=32 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val=32 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val=1 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val=Yes 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val= 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:40.119 16:26:16 -- accel/accel.sh@21 -- # val= 00:10:40.119 16:26:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # IFS=: 00:10:40.119 16:26:16 -- accel/accel.sh@20 -- # read -r var val 00:10:42.044 16:26:18 -- accel/accel.sh@21 -- # val= 00:10:42.044 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.044 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:10:42.044 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:10:42.044 16:26:18 -- accel/accel.sh@21 -- # val= 00:10:42.044 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.044 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:10:42.044 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:10:42.044 16:26:18 -- accel/accel.sh@21 -- # val= 00:10:42.044 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.044 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:10:42.044 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:10:42.044 16:26:18 -- accel/accel.sh@21 -- # val= 00:10:42.044 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.044 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:10:42.044 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:10:42.044 16:26:18 -- accel/accel.sh@21 -- # val= 00:10:42.044 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.044 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:10:42.044 16:26:18 -- 
accel/accel.sh@20 -- # read -r var val 00:10:42.044 16:26:18 -- accel/accel.sh@21 -- # val= 00:10:42.044 16:26:18 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.044 16:26:18 -- accel/accel.sh@20 -- # IFS=: 00:10:42.044 16:26:18 -- accel/accel.sh@20 -- # read -r var val 00:10:42.044 ************************************ 00:10:42.044 END TEST accel_crc32c_C2 00:10:42.044 ************************************ 00:10:42.044 16:26:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:42.044 16:26:18 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:10:42.044 16:26:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:42.044 00:10:42.044 real 0m4.722s 00:10:42.044 user 0m4.187s 00:10:42.044 sys 0m0.384s 00:10:42.044 16:26:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:42.044 16:26:18 -- common/autotest_common.sh@10 -- # set +x 00:10:42.044 16:26:18 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:42.044 16:26:18 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:42.044 16:26:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:42.044 16:26:18 -- common/autotest_common.sh@10 -- # set +x 00:10:42.044 ************************************ 00:10:42.044 START TEST accel_copy 00:10:42.044 ************************************ 00:10:42.044 16:26:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:10:42.044 16:26:18 -- accel/accel.sh@16 -- # local accel_opc 00:10:42.044 16:26:18 -- accel/accel.sh@17 -- # local accel_module 00:10:42.044 16:26:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:10:42.044 16:26:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:42.044 16:26:18 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.044 16:26:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.044 16:26:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.044 16:26:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.044 16:26:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.044 16:26:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.044 16:26:18 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.044 16:26:18 -- accel/accel.sh@42 -- # jq -r . 00:10:42.044 [2024-07-11 16:26:18.549998] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:42.044 [2024-07-11 16:26:18.550938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108809 ] 00:10:42.044 [2024-07-11 16:26:18.717742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.302 [2024-07-11 16:26:18.917171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.204 16:26:20 -- accel/accel.sh@18 -- # out=' 00:10:44.204 SPDK Configuration: 00:10:44.204 Core mask: 0x1 00:10:44.204 00:10:44.204 Accel Perf Configuration: 00:10:44.204 Workload Type: copy 00:10:44.204 Transfer size: 4096 bytes 00:10:44.204 Vector count 1 00:10:44.204 Module: software 00:10:44.204 Queue depth: 32 00:10:44.204 Allocate depth: 32 00:10:44.204 # threads/core: 1 00:10:44.204 Run time: 1 seconds 00:10:44.204 Verify: Yes 00:10:44.204 00:10:44.204 Running for 1 seconds... 
00:10:44.204 00:10:44.204 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:44.204 ------------------------------------------------------------------------------------ 00:10:44.204 0,0 305088/s 1191 MiB/s 0 0 00:10:44.204 ==================================================================================== 00:10:44.204 Total 305088/s 1191 MiB/s 0 0' 00:10:44.204 16:26:20 -- accel/accel.sh@20 -- # IFS=: 00:10:44.204 16:26:20 -- accel/accel.sh@20 -- # read -r var val 00:10:44.204 16:26:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:44.204 16:26:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:44.204 16:26:20 -- accel/accel.sh@12 -- # build_accel_config 00:10:44.204 16:26:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:44.204 16:26:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.204 16:26:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.204 16:26:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:44.204 16:26:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:44.204 16:26:20 -- accel/accel.sh@41 -- # local IFS=, 00:10:44.204 16:26:20 -- accel/accel.sh@42 -- # jq -r . 00:10:44.204 [2024-07-11 16:26:20.890862] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:44.204 [2024-07-11 16:26:20.891208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108843 ] 00:10:44.463 [2024-07-11 16:26:21.058021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.463 [2024-07-11 16:26:21.236271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val= 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val= 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val=0x1 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val= 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val= 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val=copy 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@24 -- # accel_opc=copy 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- 
accel/accel.sh@21 -- # val= 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val=software 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@23 -- # accel_module=software 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val=32 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val=32 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val=1 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val=Yes 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val= 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:44.722 16:26:21 -- accel/accel.sh@21 -- # val= 00:10:44.722 16:26:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # IFS=: 00:10:44.722 16:26:21 -- accel/accel.sh@20 -- # read -r var val 00:10:46.623 16:26:23 -- accel/accel.sh@21 -- # val= 00:10:46.623 16:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.623 16:26:23 -- accel/accel.sh@20 -- # IFS=: 00:10:46.623 16:26:23 -- accel/accel.sh@20 -- # read -r var val 00:10:46.624 16:26:23 -- accel/accel.sh@21 -- # val= 00:10:46.624 16:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.624 16:26:23 -- accel/accel.sh@20 -- # IFS=: 00:10:46.624 16:26:23 -- accel/accel.sh@20 -- # read -r var val 00:10:46.624 16:26:23 -- accel/accel.sh@21 -- # val= 00:10:46.624 16:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.624 16:26:23 -- accel/accel.sh@20 -- # IFS=: 00:10:46.624 16:26:23 -- accel/accel.sh@20 -- # read -r var val 00:10:46.624 16:26:23 -- accel/accel.sh@21 -- # val= 00:10:46.624 16:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.624 16:26:23 -- accel/accel.sh@20 -- # IFS=: 00:10:46.624 16:26:23 -- accel/accel.sh@20 -- # read -r var val 00:10:46.624 16:26:23 -- accel/accel.sh@21 -- # val= 00:10:46.624 16:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.624 16:26:23 -- accel/accel.sh@20 -- # IFS=: 00:10:46.624 16:26:23 -- accel/accel.sh@20 -- # read -r var val 00:10:46.624 16:26:23 -- accel/accel.sh@21 -- # val= 00:10:46.624 16:26:23 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.624 16:26:23 -- accel/accel.sh@20 -- # IFS=: 00:10:46.624 16:26:23 -- 
accel/accel.sh@20 -- # read -r var val 00:10:46.624 ************************************ 00:10:46.624 END TEST accel_copy 00:10:46.624 ************************************ 00:10:46.624 16:26:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:46.624 16:26:23 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:10:46.624 16:26:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:46.624 00:10:46.624 real 0m4.646s 00:10:46.624 user 0m4.189s 00:10:46.624 sys 0m0.324s 00:10:46.624 16:26:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.624 16:26:23 -- common/autotest_common.sh@10 -- # set +x 00:10:46.624 16:26:23 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:46.624 16:26:23 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:46.624 16:26:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:46.624 16:26:23 -- common/autotest_common.sh@10 -- # set +x 00:10:46.624 ************************************ 00:10:46.624 START TEST accel_fill 00:10:46.624 ************************************ 00:10:46.624 16:26:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:46.624 16:26:23 -- accel/accel.sh@16 -- # local accel_opc 00:10:46.624 16:26:23 -- accel/accel.sh@17 -- # local accel_module 00:10:46.624 16:26:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:46.624 16:26:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:46.624 16:26:23 -- accel/accel.sh@12 -- # build_accel_config 00:10:46.624 16:26:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:46.624 16:26:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.624 16:26:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.624 16:26:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:46.624 16:26:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:46.624 16:26:23 -- accel/accel.sh@41 -- # local IFS=, 00:10:46.624 16:26:23 -- accel/accel.sh@42 -- # jq -r . 00:10:46.624 [2024-07-11 16:26:23.253624] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:46.624 [2024-07-11 16:26:23.254460] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108890 ] 00:10:46.624 [2024-07-11 16:26:23.422375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.882 [2024-07-11 16:26:23.608960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.782 16:26:25 -- accel/accel.sh@18 -- # out=' 00:10:48.782 SPDK Configuration: 00:10:48.782 Core mask: 0x1 00:10:48.782 00:10:48.782 Accel Perf Configuration: 00:10:48.782 Workload Type: fill 00:10:48.782 Fill pattern: 0x80 00:10:48.782 Transfer size: 4096 bytes 00:10:48.782 Vector count 1 00:10:48.782 Module: software 00:10:48.782 Queue depth: 64 00:10:48.782 Allocate depth: 64 00:10:48.782 # threads/core: 1 00:10:48.782 Run time: 1 seconds 00:10:48.782 Verify: Yes 00:10:48.782 00:10:48.782 Running for 1 seconds... 
00:10:48.782 00:10:48.782 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:48.782 ------------------------------------------------------------------------------------ 00:10:48.782 0,0 469504/s 1834 MiB/s 0 0 00:10:48.783 ==================================================================================== 00:10:48.783 Total 469504/s 1834 MiB/s 0 0' 00:10:48.783 16:26:25 -- accel/accel.sh@20 -- # IFS=: 00:10:48.783 16:26:25 -- accel/accel.sh@20 -- # read -r var val 00:10:48.783 16:26:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:48.783 16:26:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:48.783 16:26:25 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.783 16:26:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.783 16:26:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.783 16:26:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.783 16:26:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.783 16:26:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.783 16:26:25 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.783 16:26:25 -- accel/accel.sh@42 -- # jq -r . 00:10:48.783 [2024-07-11 16:26:25.566295] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:48.783 [2024-07-11 16:26:25.566670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108926 ] 00:10:49.040 [2024-07-11 16:26:25.734544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.298 [2024-07-11 16:26:25.905185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val= 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val= 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val=0x1 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val= 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val= 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val=fill 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val=0x80 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 
00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val= 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val=software 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@23 -- # accel_module=software 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val=64 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val=64 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val=1 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val=Yes 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val= 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:49.298 16:26:26 -- accel/accel.sh@21 -- # val= 00:10:49.298 16:26:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # IFS=: 00:10:49.298 16:26:26 -- accel/accel.sh@20 -- # read -r var val 00:10:51.199 16:26:27 -- accel/accel.sh@21 -- # val= 00:10:51.199 16:26:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.199 16:26:27 -- accel/accel.sh@20 -- # IFS=: 00:10:51.199 16:26:27 -- accel/accel.sh@20 -- # read -r var val 00:10:51.199 16:26:27 -- accel/accel.sh@21 -- # val= 00:10:51.199 16:26:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.200 16:26:27 -- accel/accel.sh@20 -- # IFS=: 00:10:51.200 16:26:27 -- accel/accel.sh@20 -- # read -r var val 00:10:51.200 16:26:27 -- accel/accel.sh@21 -- # val= 00:10:51.200 16:26:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.200 16:26:27 -- accel/accel.sh@20 -- # IFS=: 00:10:51.200 16:26:27 -- accel/accel.sh@20 -- # read -r var val 00:10:51.200 16:26:27 -- accel/accel.sh@21 -- # val= 00:10:51.200 16:26:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.200 16:26:27 -- accel/accel.sh@20 -- # IFS=: 00:10:51.200 16:26:27 -- accel/accel.sh@20 -- # read -r var val 00:10:51.200 16:26:27 -- accel/accel.sh@21 -- # val= 00:10:51.200 16:26:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.200 16:26:27 -- accel/accel.sh@20 -- # IFS=: 
00:10:51.200 16:26:27 -- accel/accel.sh@20 -- # read -r var val 00:10:51.200 16:26:27 -- accel/accel.sh@21 -- # val= 00:10:51.200 16:26:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.200 16:26:27 -- accel/accel.sh@20 -- # IFS=: 00:10:51.200 16:26:27 -- accel/accel.sh@20 -- # read -r var val 00:10:51.200 ************************************ 00:10:51.200 END TEST accel_fill 00:10:51.200 ************************************ 00:10:51.200 16:26:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:51.200 16:26:27 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:51.200 16:26:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:51.200 00:10:51.200 real 0m4.619s 00:10:51.200 user 0m4.101s 00:10:51.200 sys 0m0.368s 00:10:51.200 16:26:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.200 16:26:27 -- common/autotest_common.sh@10 -- # set +x 00:10:51.200 16:26:27 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:51.200 16:26:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:10:51.200 16:26:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:51.200 16:26:27 -- common/autotest_common.sh@10 -- # set +x 00:10:51.200 ************************************ 00:10:51.200 START TEST accel_copy_crc32c 00:10:51.200 ************************************ 00:10:51.200 16:26:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:10:51.200 16:26:27 -- accel/accel.sh@16 -- # local accel_opc 00:10:51.200 16:26:27 -- accel/accel.sh@17 -- # local accel_module 00:10:51.200 16:26:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:51.200 16:26:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:51.200 16:26:27 -- accel/accel.sh@12 -- # build_accel_config 00:10:51.200 16:26:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:51.200 16:26:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:51.200 16:26:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:51.200 16:26:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:51.200 16:26:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:51.200 16:26:27 -- accel/accel.sh@41 -- # local IFS=, 00:10:51.200 16:26:27 -- accel/accel.sh@42 -- # jq -r . 00:10:51.200 [2024-07-11 16:26:27.921137] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:51.200 [2024-07-11 16:26:27.922023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109000 ] 00:10:51.458 [2024-07-11 16:26:28.089594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.717 [2024-07-11 16:26:28.276251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.617 16:26:30 -- accel/accel.sh@18 -- # out=' 00:10:53.618 SPDK Configuration: 00:10:53.618 Core mask: 0x1 00:10:53.618 00:10:53.618 Accel Perf Configuration: 00:10:53.618 Workload Type: copy_crc32c 00:10:53.618 CRC-32C seed: 0 00:10:53.618 Vector size: 4096 bytes 00:10:53.618 Transfer size: 4096 bytes 00:10:53.618 Vector count 1 00:10:53.618 Module: software 00:10:53.618 Queue depth: 32 00:10:53.618 Allocate depth: 32 00:10:53.618 # threads/core: 1 00:10:53.618 Run time: 1 seconds 00:10:53.618 Verify: Yes 00:10:53.618 00:10:53.618 Running for 1 seconds... 
00:10:53.618 00:10:53.618 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:53.618 ------------------------------------------------------------------------------------ 00:10:53.618 0,0 254880/s 995 MiB/s 0 0 00:10:53.618 ==================================================================================== 00:10:53.618 Total 254880/s 995 MiB/s 0 0' 00:10:53.618 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:53.618 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:53.618 16:26:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:53.618 16:26:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:53.618 16:26:30 -- accel/accel.sh@12 -- # build_accel_config 00:10:53.618 16:26:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:53.618 16:26:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:53.618 16:26:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.618 16:26:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:53.618 16:26:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:53.618 16:26:30 -- accel/accel.sh@41 -- # local IFS=, 00:10:53.618 16:26:30 -- accel/accel.sh@42 -- # jq -r . 00:10:53.618 [2024-07-11 16:26:30.247632] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:10:53.618 [2024-07-11 16:26:30.248145] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109033 ] 00:10:53.618 [2024-07-11 16:26:30.413585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.876 [2024-07-11 16:26:30.588278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.140 16:26:30 -- accel/accel.sh@21 -- # val= 00:10:54.140 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.140 16:26:30 -- accel/accel.sh@21 -- # val= 00:10:54.140 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.140 16:26:30 -- accel/accel.sh@21 -- # val=0x1 00:10:54.140 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.140 16:26:30 -- accel/accel.sh@21 -- # val= 00:10:54.140 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.140 16:26:30 -- accel/accel.sh@21 -- # val= 00:10:54.140 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.140 16:26:30 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:54.140 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.140 16:26:30 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.140 16:26:30 -- accel/accel.sh@21 -- # val=0 00:10:54.140 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.140 
16:26:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:54.140 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.140 16:26:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:54.140 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.140 16:26:30 -- accel/accel.sh@21 -- # val= 00:10:54.140 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.140 16:26:30 -- accel/accel.sh@21 -- # val=software 00:10:54.140 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.140 16:26:30 -- accel/accel.sh@23 -- # accel_module=software 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.140 16:26:30 -- accel/accel.sh@21 -- # val=32 00:10:54.140 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.140 16:26:30 -- accel/accel.sh@21 -- # val=32 00:10:54.140 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.140 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.141 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.141 16:26:30 -- accel/accel.sh@21 -- # val=1 00:10:54.141 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.141 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.141 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.141 16:26:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:54.141 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.141 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.141 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.141 16:26:30 -- accel/accel.sh@21 -- # val=Yes 00:10:54.141 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.141 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.141 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.141 16:26:30 -- accel/accel.sh@21 -- # val= 00:10:54.141 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.141 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.141 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:54.141 16:26:30 -- accel/accel.sh@21 -- # val= 00:10:54.141 16:26:30 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.141 16:26:30 -- accel/accel.sh@20 -- # IFS=: 00:10:54.141 16:26:30 -- accel/accel.sh@20 -- # read -r var val 00:10:56.097 16:26:32 -- accel/accel.sh@21 -- # val= 00:10:56.097 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.097 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:10:56.097 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:10:56.097 16:26:32 -- accel/accel.sh@21 -- # val= 00:10:56.097 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.097 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:10:56.097 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:10:56.097 16:26:32 -- accel/accel.sh@21 -- # val= 00:10:56.097 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.097 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:10:56.097 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:10:56.097 16:26:32 -- accel/accel.sh@21 -- # val= 00:10:56.097 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.097 16:26:32 -- accel/accel.sh@20 -- # IFS=: 
00:10:56.097 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:10:56.097 16:26:32 -- accel/accel.sh@21 -- # val= 00:10:56.097 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.097 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:10:56.097 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:10:56.097 16:26:32 -- accel/accel.sh@21 -- # val= 00:10:56.097 16:26:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.097 16:26:32 -- accel/accel.sh@20 -- # IFS=: 00:10:56.097 16:26:32 -- accel/accel.sh@20 -- # read -r var val 00:10:56.097 ************************************ 00:10:56.097 END TEST accel_copy_crc32c 00:10:56.097 ************************************ 00:10:56.097 16:26:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:56.097 16:26:32 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:56.097 16:26:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:56.097 00:10:56.097 real 0m4.649s 00:10:56.097 user 0m4.140s 00:10:56.097 sys 0m0.351s 00:10:56.097 16:26:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:56.097 16:26:32 -- common/autotest_common.sh@10 -- # set +x 00:10:56.097 16:26:32 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:56.097 16:26:32 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:10:56.097 16:26:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:56.097 16:26:32 -- common/autotest_common.sh@10 -- # set +x 00:10:56.097 ************************************ 00:10:56.097 START TEST accel_copy_crc32c_C2 00:10:56.097 ************************************ 00:10:56.097 16:26:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:56.097 16:26:32 -- accel/accel.sh@16 -- # local accel_opc 00:10:56.097 16:26:32 -- accel/accel.sh@17 -- # local accel_module 00:10:56.097 16:26:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:56.097 16:26:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:56.098 16:26:32 -- accel/accel.sh@12 -- # build_accel_config 00:10:56.098 16:26:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:56.098 16:26:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:56.098 16:26:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:56.098 16:26:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:56.098 16:26:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:56.098 16:26:32 -- accel/accel.sh@41 -- # local IFS=, 00:10:56.098 16:26:32 -- accel/accel.sh@42 -- # jq -r . 00:10:56.098 [2024-07-11 16:26:32.615020] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
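The copy_crc32c test above exercises a fused copy-plus-checksum: each transfer copies a 4096-byte source into a destination buffer and folds the data into a CRC-32C with the configured seed (0 here), served by the software module. A minimal sketch of such a software path, assuming the reflected Castagnoli polynomial 0x82F63B78; the name and signature are illustrative, not SPDK's actual accel API:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Sketch of a software copy_crc32c: copy src into dst, then return the
     * CRC-32C of the data (bitwise form of the reflected Castagnoli
     * polynomial). Illustrative helper, not the real SPDK accel API. */
    static uint32_t copy_crc32c(void *dst, const void *src, size_t len,
                                uint32_t seed)
    {
        const uint8_t *p = src;
        uint32_t crc = ~seed;

        memcpy(dst, src, len);
        for (size_t i = 0; i < len; i++) {
            crc ^= p[i];
            for (int b = 0; b < 8; b++) {
                crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
            }
        }
        return ~crc;
    }

With Verify: Yes, accel_perf checks each completed transfer against the expected output, and mismatches land in the Failed/Miscompares columns of the result table.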
00:10:56.098 [2024-07-11 16:26:32.615429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109080 ] 00:10:56.098 [2024-07-11 16:26:32.767423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.356 [2024-07-11 16:26:32.955367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.258 16:26:34 -- accel/accel.sh@18 -- # out=' 00:10:58.258 SPDK Configuration: 00:10:58.258 Core mask: 0x1 00:10:58.258 00:10:58.258 Accel Perf Configuration: 00:10:58.258 Workload Type: copy_crc32c 00:10:58.258 CRC-32C seed: 0 00:10:58.258 Vector size: 4096 bytes 00:10:58.258 Transfer size: 8192 bytes 00:10:58.258 Vector count 2 00:10:58.258 Module: software 00:10:58.258 Queue depth: 32 00:10:58.258 Allocate depth: 32 00:10:58.258 # threads/core: 1 00:10:58.258 Run time: 1 seconds 00:10:58.258 Verify: Yes 00:10:58.258 00:10:58.258 Running for 1 seconds... 00:10:58.258 00:10:58.258 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:58.258 ------------------------------------------------------------------------------------ 00:10:58.258 0,0 175392/s 1370 MiB/s 0 0 00:10:58.258 ==================================================================================== 00:10:58.258 Total 175392/s 1370 MiB/s 0 0' 00:10:58.258 16:26:34 -- accel/accel.sh@20 -- # IFS=: 00:10:58.258 16:26:34 -- accel/accel.sh@20 -- # read -r var val 00:10:58.258 16:26:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:58.258 16:26:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:58.258 16:26:34 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.258 16:26:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.258 16:26:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.258 16:26:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.258 16:26:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.258 16:26:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.258 16:26:34 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.258 16:26:34 -- accel/accel.sh@42 -- # jq -r . 00:10:58.258 [2024-07-11 16:26:34.910123] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:10:58.258 [2024-07-11 16:26:34.910468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109121 ] 00:10:58.516 [2024-07-11 16:26:35.069814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.516 [2024-07-11 16:26:35.256687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val= 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val= 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val=0x1 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val= 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val= 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val=0 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val= 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val=software 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@23 -- # accel_module=software 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val=32 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val=32 
00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val=1 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val=Yes 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val= 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:10:58.774 16:26:35 -- accel/accel.sh@21 -- # val= 00:10:58.774 16:26:35 -- accel/accel.sh@22 -- # case "$var" in 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # IFS=: 00:10:58.774 16:26:35 -- accel/accel.sh@20 -- # read -r var val 00:11:00.675 16:26:37 -- accel/accel.sh@21 -- # val= 00:11:00.675 16:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.675 16:26:37 -- accel/accel.sh@20 -- # IFS=: 00:11:00.675 16:26:37 -- accel/accel.sh@20 -- # read -r var val 00:11:00.675 16:26:37 -- accel/accel.sh@21 -- # val= 00:11:00.675 16:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.675 16:26:37 -- accel/accel.sh@20 -- # IFS=: 00:11:00.675 16:26:37 -- accel/accel.sh@20 -- # read -r var val 00:11:00.675 16:26:37 -- accel/accel.sh@21 -- # val= 00:11:00.675 16:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.675 16:26:37 -- accel/accel.sh@20 -- # IFS=: 00:11:00.675 16:26:37 -- accel/accel.sh@20 -- # read -r var val 00:11:00.675 16:26:37 -- accel/accel.sh@21 -- # val= 00:11:00.675 16:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.675 16:26:37 -- accel/accel.sh@20 -- # IFS=: 00:11:00.675 16:26:37 -- accel/accel.sh@20 -- # read -r var val 00:11:00.675 16:26:37 -- accel/accel.sh@21 -- # val= 00:11:00.675 16:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.675 16:26:37 -- accel/accel.sh@20 -- # IFS=: 00:11:00.675 16:26:37 -- accel/accel.sh@20 -- # read -r var val 00:11:00.675 16:26:37 -- accel/accel.sh@21 -- # val= 00:11:00.675 16:26:37 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.675 16:26:37 -- accel/accel.sh@20 -- # IFS=: 00:11:00.675 16:26:37 -- accel/accel.sh@20 -- # read -r var val 00:11:00.675 ************************************ 00:11:00.675 END TEST accel_copy_crc32c_C2 00:11:00.675 ************************************ 00:11:00.675 16:26:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:00.675 16:26:37 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:11:00.675 16:26:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:00.675 00:11:00.675 real 0m4.582s 00:11:00.675 user 0m4.097s 00:11:00.675 sys 0m0.347s 00:11:00.675 16:26:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.675 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:11:00.675 16:26:37 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:11:00.675 16:26:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
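Note the relationship between the two copy_crc32c runs: with -C 2 each transfer chains two 4096-byte vectors, so reported bandwidth follows transfers × transfer size, 175392/s × 8192 B ≈ 1370 MiB/s, against 254880/s × 4096 B ≈ 995 MiB/s for the single-vector run; the transfer rate drops roughly in step with the doubled payload.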
00:11:00.675 16:26:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:00.675 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:11:00.675 ************************************ 00:11:00.675 START TEST accel_dualcast 00:11:00.675 ************************************ 00:11:00.675 16:26:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:11:00.675 16:26:37 -- accel/accel.sh@16 -- # local accel_opc 00:11:00.675 16:26:37 -- accel/accel.sh@17 -- # local accel_module 00:11:00.675 16:26:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:11:00.675 16:26:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:00.675 16:26:37 -- accel/accel.sh@12 -- # build_accel_config 00:11:00.675 16:26:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:00.675 16:26:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:00.675 16:26:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:00.675 16:26:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:00.675 16:26:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:00.675 16:26:37 -- accel/accel.sh@41 -- # local IFS=, 00:11:00.675 16:26:37 -- accel/accel.sh@42 -- # jq -r . 00:11:00.675 [2024-07-11 16:26:37.258433] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:00.675 [2024-07-11 16:26:37.258804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109186 ] 00:11:00.675 [2024-07-11 16:26:37.426879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.933 [2024-07-11 16:26:37.607303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.835 16:26:39 -- accel/accel.sh@18 -- # out=' 00:11:02.835 SPDK Configuration: 00:11:02.835 Core mask: 0x1 00:11:02.835 00:11:02.835 Accel Perf Configuration: 00:11:02.835 Workload Type: dualcast 00:11:02.835 Transfer size: 4096 bytes 00:11:02.835 Vector count 1 00:11:02.835 Module: software 00:11:02.835 Queue depth: 32 00:11:02.835 Allocate depth: 32 00:11:02.835 # threads/core: 1 00:11:02.835 Run time: 1 seconds 00:11:02.835 Verify: Yes 00:11:02.835 00:11:02.835 Running for 1 seconds... 00:11:02.835 00:11:02.835 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:02.835 ------------------------------------------------------------------------------------ 00:11:02.835 0,0 315072/s 1230 MiB/s 0 0 00:11:02.835 ==================================================================================== 00:11:02.835 Total 315072/s 1230 MiB/s 0 0' 00:11:02.835 16:26:39 -- accel/accel.sh@20 -- # IFS=: 00:11:02.835 16:26:39 -- accel/accel.sh@20 -- # read -r var val 00:11:02.835 16:26:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:11:02.835 16:26:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:02.835 16:26:39 -- accel/accel.sh@12 -- # build_accel_config 00:11:02.835 16:26:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:02.835 16:26:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:02.835 16:26:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:02.835 16:26:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:02.835 16:26:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:02.835 16:26:39 -- accel/accel.sh@41 -- # local IFS=, 00:11:02.835 16:26:39 -- accel/accel.sh@42 -- # jq -r . 
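The dualcast workload duplicates one source buffer into two destinations per transfer. A minimal software sketch under that reading (illustrative signature, not the SPDK accel API):

    #include <stddef.h>
    #include <string.h>

    /* dualcast sketch: one source, two destination copies per transfer. */
    static void dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }

The 1230 MiB/s figure above counts the 4096-byte transfer once (315072/s × 4096 B), not the two destination writes.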
00:11:02.835 [2024-07-11 16:26:39.613312] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:02.835 [2024-07-11 16:26:39.613656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109220 ] 00:11:03.093 [2024-07-11 16:26:39.778466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.351 [2024-07-11 16:26:39.965488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val= 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val= 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val=0x1 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val= 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val= 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val=dualcast 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val= 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val=software 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@23 -- # accel_module=software 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val=32 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val=32 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val=1 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 
16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val=Yes 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val= 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:03.351 16:26:40 -- accel/accel.sh@21 -- # val= 00:11:03.351 16:26:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # IFS=: 00:11:03.351 16:26:40 -- accel/accel.sh@20 -- # read -r var val 00:11:05.252 16:26:41 -- accel/accel.sh@21 -- # val= 00:11:05.252 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.252 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:11:05.252 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:11:05.252 16:26:41 -- accel/accel.sh@21 -- # val= 00:11:05.252 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.252 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:11:05.252 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:11:05.252 16:26:41 -- accel/accel.sh@21 -- # val= 00:11:05.252 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.252 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:11:05.252 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:11:05.252 16:26:41 -- accel/accel.sh@21 -- # val= 00:11:05.252 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.252 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:11:05.252 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:11:05.252 16:26:41 -- accel/accel.sh@21 -- # val= 00:11:05.252 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.252 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:11:05.252 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:11:05.252 16:26:41 -- accel/accel.sh@21 -- # val= 00:11:05.252 16:26:41 -- accel/accel.sh@22 -- # case "$var" in 00:11:05.252 16:26:41 -- accel/accel.sh@20 -- # IFS=: 00:11:05.252 16:26:41 -- accel/accel.sh@20 -- # read -r var val 00:11:05.252 ************************************ 00:11:05.252 END TEST accel_dualcast 00:11:05.252 ************************************ 00:11:05.252 16:26:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:05.252 16:26:41 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:11:05.252 16:26:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:05.252 00:11:05.252 real 0m4.670s 00:11:05.252 user 0m4.156s 00:11:05.252 sys 0m0.376s 00:11:05.252 16:26:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:05.252 16:26:41 -- common/autotest_common.sh@10 -- # set +x 00:11:05.252 16:26:41 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:11:05.252 16:26:41 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:05.252 16:26:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:05.252 16:26:41 -- common/autotest_common.sh@10 -- # set +x 00:11:05.252 ************************************ 00:11:05.252 START TEST accel_compare 00:11:05.252 ************************************ 00:11:05.252 16:26:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:11:05.252 
16:26:41 -- accel/accel.sh@16 -- # local accel_opc 00:11:05.252 16:26:41 -- accel/accel.sh@17 -- # local accel_module 00:11:05.252 16:26:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:11:05.252 16:26:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:05.252 16:26:41 -- accel/accel.sh@12 -- # build_accel_config 00:11:05.252 16:26:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:05.252 16:26:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:05.252 16:26:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:05.252 16:26:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:05.252 16:26:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:05.252 16:26:41 -- accel/accel.sh@41 -- # local IFS=, 00:11:05.252 16:26:41 -- accel/accel.sh@42 -- # jq -r . 00:11:05.252 [2024-07-11 16:26:41.978367] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:05.252 [2024-07-11 16:26:41.978723] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109274 ] 00:11:05.510 [2024-07-11 16:26:42.146600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.768 [2024-07-11 16:26:42.336167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.668 16:26:44 -- accel/accel.sh@18 -- # out=' 00:11:07.668 SPDK Configuration: 00:11:07.668 Core mask: 0x1 00:11:07.668 00:11:07.668 Accel Perf Configuration: 00:11:07.668 Workload Type: compare 00:11:07.668 Transfer size: 4096 bytes 00:11:07.668 Vector count 1 00:11:07.668 Module: software 00:11:07.668 Queue depth: 32 00:11:07.668 Allocate depth: 32 00:11:07.668 # threads/core: 1 00:11:07.668 Run time: 1 seconds 00:11:07.668 Verify: Yes 00:11:07.668 00:11:07.668 Running for 1 seconds... 00:11:07.668 00:11:07.668 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:07.668 ------------------------------------------------------------------------------------ 00:11:07.668 0,0 461184/s 1801 MiB/s 0 0 00:11:07.668 ==================================================================================== 00:11:07.668 Total 461184/s 1801 MiB/s 0 0' 00:11:07.668 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:07.668 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:07.668 16:26:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:11:07.668 16:26:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:07.668 16:26:44 -- accel/accel.sh@12 -- # build_accel_config 00:11:07.668 16:26:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:07.668 16:26:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:07.668 16:26:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:07.668 16:26:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:07.668 16:26:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:07.668 16:26:44 -- accel/accel.sh@41 -- # local IFS=, 00:11:07.668 16:26:44 -- accel/accel.sh@42 -- # jq -r . 00:11:07.668 [2024-07-11 16:26:44.283686] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
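The compare workload checks two equal-length buffers byte for byte, with any difference surfacing in the Miscompares column; in a software path this is effectively a memcmp(). A sketch (illustrative name, not the SPDK accel API):

    #include <stddef.h>
    #include <string.h>

    /* compare sketch: 0 on match; a nonzero memcmp() counts as a miscompare. */
    static int compare_buffers(const void *src1, const void *src2, size_t len)
    {
        return memcmp(src1, src2, len) != 0 ? -1 : 0;
    }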
00:11:07.669 [2024-07-11 16:26:44.284031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109308 ] 00:11:07.669 [2024-07-11 16:26:44.451083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.926 [2024-07-11 16:26:44.625332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.183 16:26:44 -- accel/accel.sh@21 -- # val= 00:11:08.183 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.183 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.183 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.183 16:26:44 -- accel/accel.sh@21 -- # val= 00:11:08.183 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.183 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.183 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.183 16:26:44 -- accel/accel.sh@21 -- # val=0x1 00:11:08.183 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.183 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.183 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.183 16:26:44 -- accel/accel.sh@21 -- # val= 00:11:08.183 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.183 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.183 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.183 16:26:44 -- accel/accel.sh@21 -- # val= 00:11:08.184 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.184 16:26:44 -- accel/accel.sh@21 -- # val=compare 00:11:08.184 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.184 16:26:44 -- accel/accel.sh@24 -- # accel_opc=compare 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.184 16:26:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:08.184 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.184 16:26:44 -- accel/accel.sh@21 -- # val= 00:11:08.184 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.184 16:26:44 -- accel/accel.sh@21 -- # val=software 00:11:08.184 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.184 16:26:44 -- accel/accel.sh@23 -- # accel_module=software 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.184 16:26:44 -- accel/accel.sh@21 -- # val=32 00:11:08.184 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.184 16:26:44 -- accel/accel.sh@21 -- # val=32 00:11:08.184 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.184 16:26:44 -- accel/accel.sh@21 -- # val=1 00:11:08.184 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.184 16:26:44 -- accel/accel.sh@21 -- # val='1 seconds' 
00:11:08.184 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.184 16:26:44 -- accel/accel.sh@21 -- # val=Yes 00:11:08.184 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.184 16:26:44 -- accel/accel.sh@21 -- # val= 00:11:08.184 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:08.184 16:26:44 -- accel/accel.sh@21 -- # val= 00:11:08.184 16:26:44 -- accel/accel.sh@22 -- # case "$var" in 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # IFS=: 00:11:08.184 16:26:44 -- accel/accel.sh@20 -- # read -r var val 00:11:10.083 16:26:46 -- accel/accel.sh@21 -- # val= 00:11:10.083 16:26:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.083 16:26:46 -- accel/accel.sh@20 -- # IFS=: 00:11:10.083 16:26:46 -- accel/accel.sh@20 -- # read -r var val 00:11:10.083 16:26:46 -- accel/accel.sh@21 -- # val= 00:11:10.083 16:26:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.083 16:26:46 -- accel/accel.sh@20 -- # IFS=: 00:11:10.083 16:26:46 -- accel/accel.sh@20 -- # read -r var val 00:11:10.083 16:26:46 -- accel/accel.sh@21 -- # val= 00:11:10.083 16:26:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.083 16:26:46 -- accel/accel.sh@20 -- # IFS=: 00:11:10.083 16:26:46 -- accel/accel.sh@20 -- # read -r var val 00:11:10.083 16:26:46 -- accel/accel.sh@21 -- # val= 00:11:10.083 16:26:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.083 16:26:46 -- accel/accel.sh@20 -- # IFS=: 00:11:10.083 16:26:46 -- accel/accel.sh@20 -- # read -r var val 00:11:10.083 16:26:46 -- accel/accel.sh@21 -- # val= 00:11:10.083 16:26:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.083 16:26:46 -- accel/accel.sh@20 -- # IFS=: 00:11:10.083 16:26:46 -- accel/accel.sh@20 -- # read -r var val 00:11:10.083 16:26:46 -- accel/accel.sh@21 -- # val= 00:11:10.083 16:26:46 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.083 16:26:46 -- accel/accel.sh@20 -- # IFS=: 00:11:10.083 16:26:46 -- accel/accel.sh@20 -- # read -r var val 00:11:10.083 ************************************ 00:11:10.083 END TEST accel_compare 00:11:10.083 ************************************ 00:11:10.083 16:26:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:10.083 16:26:46 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:11:10.083 16:26:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:10.083 00:11:10.083 real 0m4.618s 00:11:10.083 user 0m4.139s 00:11:10.083 sys 0m0.343s 00:11:10.083 16:26:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.083 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:11:10.083 16:26:46 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:11:10.083 16:26:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:10.083 16:26:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:10.083 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:11:10.083 ************************************ 00:11:10.083 START TEST accel_xor 00:11:10.083 ************************************ 00:11:10.083 16:26:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:11:10.083 16:26:46 -- accel/accel.sh@16 -- # local accel_opc 00:11:10.083 16:26:46 -- accel/accel.sh@17 -- # local accel_module 00:11:10.083 
16:26:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:11:10.083 16:26:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:10.083 16:26:46 -- accel/accel.sh@12 -- # build_accel_config 00:11:10.083 16:26:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:10.084 16:26:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:10.084 16:26:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:10.084 16:26:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:10.084 16:26:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:10.084 16:26:46 -- accel/accel.sh@41 -- # local IFS=, 00:11:10.084 16:26:46 -- accel/accel.sh@42 -- # jq -r . 00:11:10.084 [2024-07-11 16:26:46.644348] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:10.084 [2024-07-11 16:26:46.645320] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109372 ] 00:11:10.084 [2024-07-11 16:26:46.817914] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.342 [2024-07-11 16:26:47.008610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.242 16:26:48 -- accel/accel.sh@18 -- # out=' 00:11:12.242 SPDK Configuration: 00:11:12.242 Core mask: 0x1 00:11:12.242 00:11:12.242 Accel Perf Configuration: 00:11:12.242 Workload Type: xor 00:11:12.242 Source buffers: 2 00:11:12.242 Transfer size: 4096 bytes 00:11:12.242 Vector count 1 00:11:12.242 Module: software 00:11:12.242 Queue depth: 32 00:11:12.242 Allocate depth: 32 00:11:12.242 # threads/core: 1 00:11:12.242 Run time: 1 seconds 00:11:12.242 Verify: Yes 00:11:12.242 00:11:12.242 Running for 1 seconds... 00:11:12.242 00:11:12.242 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:12.242 ------------------------------------------------------------------------------------ 00:11:12.243 0,0 245600/s 959 MiB/s 0 0 00:11:12.243 ==================================================================================== 00:11:12.243 Total 245600/s 959 MiB/s 0 0' 00:11:12.243 16:26:48 -- accel/accel.sh@20 -- # IFS=: 00:11:12.243 16:26:48 -- accel/accel.sh@20 -- # read -r var val 00:11:12.243 16:26:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:11:12.243 16:26:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:12.243 16:26:48 -- accel/accel.sh@12 -- # build_accel_config 00:11:12.243 16:26:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:12.243 16:26:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:12.243 16:26:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:12.243 16:26:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:12.243 16:26:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:12.243 16:26:48 -- accel/accel.sh@41 -- # local IFS=, 00:11:12.243 16:26:48 -- accel/accel.sh@42 -- # jq -r . 00:11:12.243 [2024-07-11 16:26:48.975236] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
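The xor workload folds several source buffers into one destination byte-wise; this run uses two sources, and the -x 3 run below uses three. A software sketch under those assumptions (names illustrative, nsrcs >= 1 assumed):

    #include <stddef.h>
    #include <stdint.h>

    /* xor sketch: dst[i] = srcs[0][i] ^ ... ^ srcs[nsrcs-1][i]. */
    static void xor_buffers(uint8_t *dst, const uint8_t *const *srcs,
                            unsigned int nsrcs, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t v = srcs[0][i];
            for (unsigned int s = 1; s < nsrcs; s++) {
                v ^= srcs[s][i];
            }
            dst[i] = v;
        }
    }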
00:11:12.243 [2024-07-11 16:26:48.975584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109417 ] 00:11:12.501 [2024-07-11 16:26:49.141705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.759 [2024-07-11 16:26:49.313232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val= 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val= 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val=0x1 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val= 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val= 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val=xor 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val=2 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val= 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val=software 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@23 -- # accel_module=software 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val=32 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val=32 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val=1 00:11:12.759 16:26:49 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val=Yes 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val= 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:12.759 16:26:49 -- accel/accel.sh@21 -- # val= 00:11:12.759 16:26:49 -- accel/accel.sh@22 -- # case "$var" in 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # IFS=: 00:11:12.759 16:26:49 -- accel/accel.sh@20 -- # read -r var val 00:11:14.660 16:26:51 -- accel/accel.sh@21 -- # val= 00:11:14.660 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.660 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:11:14.660 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:11:14.660 16:26:51 -- accel/accel.sh@21 -- # val= 00:11:14.660 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.660 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:11:14.660 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:11:14.660 16:26:51 -- accel/accel.sh@21 -- # val= 00:11:14.660 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.660 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:11:14.660 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:11:14.660 16:26:51 -- accel/accel.sh@21 -- # val= 00:11:14.660 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.660 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:11:14.660 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:11:14.660 16:26:51 -- accel/accel.sh@21 -- # val= 00:11:14.660 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.660 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:11:14.660 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:11:14.660 16:26:51 -- accel/accel.sh@21 -- # val= 00:11:14.660 16:26:51 -- accel/accel.sh@22 -- # case "$var" in 00:11:14.660 16:26:51 -- accel/accel.sh@20 -- # IFS=: 00:11:14.660 16:26:51 -- accel/accel.sh@20 -- # read -r var val 00:11:14.660 ************************************ 00:11:14.660 END TEST accel_xor 00:11:14.660 ************************************ 00:11:14.660 16:26:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:14.660 16:26:51 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:14.660 16:26:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:14.660 00:11:14.660 real 0m4.683s 00:11:14.660 user 0m4.180s 00:11:14.660 sys 0m0.345s 00:11:14.660 16:26:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.660 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:11:14.660 16:26:51 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:14.660 16:26:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:14.660 16:26:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:14.660 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:11:14.660 ************************************ 00:11:14.660 START TEST accel_xor 00:11:14.660 ************************************ 00:11:14.660 
16:26:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:11:14.660 16:26:51 -- accel/accel.sh@16 -- # local accel_opc 00:11:14.660 16:26:51 -- accel/accel.sh@17 -- # local accel_module 00:11:14.660 16:26:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:11:14.661 16:26:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:14.661 16:26:51 -- accel/accel.sh@12 -- # build_accel_config 00:11:14.661 16:26:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:14.661 16:26:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:14.661 16:26:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:14.661 16:26:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:14.661 16:26:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:14.661 16:26:51 -- accel/accel.sh@41 -- # local IFS=, 00:11:14.661 16:26:51 -- accel/accel.sh@42 -- # jq -r . 00:11:14.661 [2024-07-11 16:26:51.375392] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:14.661 [2024-07-11 16:26:51.375756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109469 ] 00:11:14.919 [2024-07-11 16:26:51.542766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.177 [2024-07-11 16:26:51.740803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.078 16:26:53 -- accel/accel.sh@18 -- # out=' 00:11:17.078 SPDK Configuration: 00:11:17.078 Core mask: 0x1 00:11:17.078 00:11:17.078 Accel Perf Configuration: 00:11:17.078 Workload Type: xor 00:11:17.078 Source buffers: 3 00:11:17.078 Transfer size: 4096 bytes 00:11:17.078 Vector count 1 00:11:17.078 Module: software 00:11:17.078 Queue depth: 32 00:11:17.078 Allocate depth: 32 00:11:17.078 # threads/core: 1 00:11:17.078 Run time: 1 seconds 00:11:17.078 Verify: Yes 00:11:17.078 00:11:17.078 Running for 1 seconds... 00:11:17.078 00:11:17.078 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:17.078 ------------------------------------------------------------------------------------ 00:11:17.078 0,0 235008/s 918 MiB/s 0 0 00:11:17.078 ==================================================================================== 00:11:17.078 Total 235008/s 918 MiB/s 0 0' 00:11:17.078 16:26:53 -- accel/accel.sh@20 -- # IFS=: 00:11:17.078 16:26:53 -- accel/accel.sh@20 -- # read -r var val 00:11:17.078 16:26:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:17.078 16:26:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:17.078 16:26:53 -- accel/accel.sh@12 -- # build_accel_config 00:11:17.078 16:26:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:17.078 16:26:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.078 16:26:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.078 16:26:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:17.078 16:26:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:17.078 16:26:53 -- accel/accel.sh@41 -- # local IFS=, 00:11:17.078 16:26:53 -- accel/accel.sh@42 -- # jq -r . 00:11:17.078 [2024-07-11 16:26:53.701970] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
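The -x 3 run is the same fold with a third source buffer; the extra read stream shows up as a modest drop in rate, 235008/s × 4096 B ≈ 918 MiB/s here versus 245600/s × 4096 B ≈ 959 MiB/s with two sources.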
00:11:17.078 [2024-07-11 16:26:53.702165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109501 ] 00:11:17.078 [2024-07-11 16:26:53.867180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.336 [2024-07-11 16:26:54.059785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.594 16:26:54 -- accel/accel.sh@21 -- # val= 00:11:17.594 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.594 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.594 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.594 16:26:54 -- accel/accel.sh@21 -- # val= 00:11:17.594 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.594 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.594 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.594 16:26:54 -- accel/accel.sh@21 -- # val=0x1 00:11:17.594 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.594 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.594 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.594 16:26:54 -- accel/accel.sh@21 -- # val= 00:11:17.594 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.594 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.594 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.594 16:26:54 -- accel/accel.sh@21 -- # val= 00:11:17.594 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.594 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.594 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.594 16:26:54 -- accel/accel.sh@21 -- # val=xor 00:11:17.594 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.594 16:26:54 -- accel/accel.sh@24 -- # accel_opc=xor 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.595 16:26:54 -- accel/accel.sh@21 -- # val=3 00:11:17.595 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.595 16:26:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:17.595 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.595 16:26:54 -- accel/accel.sh@21 -- # val= 00:11:17.595 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.595 16:26:54 -- accel/accel.sh@21 -- # val=software 00:11:17.595 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.595 16:26:54 -- accel/accel.sh@23 -- # accel_module=software 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.595 16:26:54 -- accel/accel.sh@21 -- # val=32 00:11:17.595 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.595 16:26:54 -- accel/accel.sh@21 -- # val=32 00:11:17.595 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.595 16:26:54 -- accel/accel.sh@21 -- # val=1 00:11:17.595 16:26:54 -- 
accel/accel.sh@22 -- # case "$var" in 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.595 16:26:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:17.595 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.595 16:26:54 -- accel/accel.sh@21 -- # val=Yes 00:11:17.595 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.595 16:26:54 -- accel/accel.sh@21 -- # val= 00:11:17.595 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:17.595 16:26:54 -- accel/accel.sh@21 -- # val= 00:11:17.595 16:26:54 -- accel/accel.sh@22 -- # case "$var" in 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # IFS=: 00:11:17.595 16:26:54 -- accel/accel.sh@20 -- # read -r var val 00:11:19.495 16:26:55 -- accel/accel.sh@21 -- # val= 00:11:19.495 16:26:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.495 16:26:55 -- accel/accel.sh@20 -- # IFS=: 00:11:19.495 16:26:55 -- accel/accel.sh@20 -- # read -r var val 00:11:19.495 16:26:55 -- accel/accel.sh@21 -- # val= 00:11:19.495 16:26:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.495 16:26:55 -- accel/accel.sh@20 -- # IFS=: 00:11:19.495 16:26:55 -- accel/accel.sh@20 -- # read -r var val 00:11:19.495 16:26:55 -- accel/accel.sh@21 -- # val= 00:11:19.495 16:26:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.495 16:26:55 -- accel/accel.sh@20 -- # IFS=: 00:11:19.495 16:26:55 -- accel/accel.sh@20 -- # read -r var val 00:11:19.495 16:26:55 -- accel/accel.sh@21 -- # val= 00:11:19.495 16:26:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.495 16:26:55 -- accel/accel.sh@20 -- # IFS=: 00:11:19.495 16:26:55 -- accel/accel.sh@20 -- # read -r var val 00:11:19.495 16:26:55 -- accel/accel.sh@21 -- # val= 00:11:19.495 16:26:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.495 16:26:55 -- accel/accel.sh@20 -- # IFS=: 00:11:19.495 16:26:55 -- accel/accel.sh@20 -- # read -r var val 00:11:19.495 16:26:55 -- accel/accel.sh@21 -- # val= 00:11:19.495 16:26:55 -- accel/accel.sh@22 -- # case "$var" in 00:11:19.495 16:26:55 -- accel/accel.sh@20 -- # IFS=: 00:11:19.495 16:26:55 -- accel/accel.sh@20 -- # read -r var val 00:11:19.495 ************************************ 00:11:19.495 END TEST accel_xor 00:11:19.495 ************************************ 00:11:19.495 16:26:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:19.495 16:26:55 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:11:19.495 16:26:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:19.495 00:11:19.495 real 0m4.656s 00:11:19.495 user 0m4.171s 00:11:19.495 sys 0m0.322s 00:11:19.495 16:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.495 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:11:19.495 16:26:56 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:19.495 16:26:56 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:19.495 16:26:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:19.495 16:26:56 -- common/autotest_common.sh@10 -- # set +x 00:11:19.495 ************************************ 00:11:19.495 START TEST accel_dif_verify 00:11:19.495 ************************************ 
00:11:19.495 16:26:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:11:19.495 16:26:56 -- accel/accel.sh@16 -- # local accel_opc 00:11:19.495 16:26:56 -- accel/accel.sh@17 -- # local accel_module 00:11:19.495 16:26:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:11:19.495 16:26:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:19.495 16:26:56 -- accel/accel.sh@12 -- # build_accel_config 00:11:19.495 16:26:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:19.495 16:26:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:19.495 16:26:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:19.495 16:26:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:19.495 16:26:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:19.495 16:26:56 -- accel/accel.sh@41 -- # local IFS=, 00:11:19.495 16:26:56 -- accel/accel.sh@42 -- # jq -r . 00:11:19.495 [2024-07-11 16:26:56.078934] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:19.495 [2024-07-11 16:26:56.079132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109548 ] 00:11:19.495 [2024-07-11 16:26:56.234068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.768 [2024-07-11 16:26:56.409462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.687 16:26:58 -- accel/accel.sh@18 -- # out=' 00:11:21.687 SPDK Configuration: 00:11:21.687 Core mask: 0x1 00:11:21.687 00:11:21.687 Accel Perf Configuration: 00:11:21.687 Workload Type: dif_verify 00:11:21.687 Vector size: 4096 bytes 00:11:21.687 Transfer size: 4096 bytes 00:11:21.687 Block size: 512 bytes 00:11:21.687 Metadata size: 8 bytes 00:11:21.687 Vector count 1 00:11:21.687 Module: software 00:11:21.687 Queue depth: 32 00:11:21.687 Allocate depth: 32 00:11:21.687 # threads/core: 1 00:11:21.687 Run time: 1 seconds 00:11:21.687 Verify: No 00:11:21.687 00:11:21.687 Running for 1 seconds... 00:11:21.687 00:11:21.687 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:21.687 ------------------------------------------------------------------------------------ 00:11:21.687 0,0 103392/s 410 MiB/s 0 0 00:11:21.687 ==================================================================================== 00:11:21.687 Total 103392/s 403 MiB/s 0 0' 00:11:21.687 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:21.687 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:21.687 16:26:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:21.687 16:26:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:21.687 16:26:58 -- accel/accel.sh@12 -- # build_accel_config 00:11:21.687 16:26:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:21.687 16:26:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:21.687 16:26:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:21.687 16:26:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:21.687 16:26:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:21.687 16:26:58 -- accel/accel.sh@41 -- # local IFS=, 00:11:21.687 16:26:58 -- accel/accel.sh@42 -- # jq -r . 00:11:21.687 [2024-07-11 16:26:58.381050] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
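The Total row in the dif_verify table above is simply the transfer count times the 4096-byte transfer size, converted to MiB; bash integer arithmetic truncates to exactly the logged figure:

  $ echo $(( 103392 * 4096 / 1048576 ))   # bytes over one second, in MiB
  403

The per-core row reports 410 MiB/s for the same 103392 transfers, presumably computed against a slightly different per-core time base; that is an inference from this log, not something the tool's output states.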
00:11:21.687 [2024-07-11 16:26:58.381254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109609 ] 00:11:21.944 [2024-07-11 16:26:58.532483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.944 [2024-07-11 16:26:58.724478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val= 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val= 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val=0x1 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val= 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val= 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val=dif_verify 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val= 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val=software 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@23 -- # accel_module=software 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- 
accel/accel.sh@21 -- # val=32 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val=32 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val=1 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val=No 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val= 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:22.202 16:26:58 -- accel/accel.sh@21 -- # val= 00:11:22.202 16:26:58 -- accel/accel.sh@22 -- # case "$var" in 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # IFS=: 00:11:22.202 16:26:58 -- accel/accel.sh@20 -- # read -r var val 00:11:24.099 16:27:00 -- accel/accel.sh@21 -- # val= 00:11:24.099 16:27:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.099 16:27:00 -- accel/accel.sh@20 -- # IFS=: 00:11:24.099 16:27:00 -- accel/accel.sh@20 -- # read -r var val 00:11:24.099 16:27:00 -- accel/accel.sh@21 -- # val= 00:11:24.099 16:27:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.099 16:27:00 -- accel/accel.sh@20 -- # IFS=: 00:11:24.099 16:27:00 -- accel/accel.sh@20 -- # read -r var val 00:11:24.099 16:27:00 -- accel/accel.sh@21 -- # val= 00:11:24.099 16:27:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.099 16:27:00 -- accel/accel.sh@20 -- # IFS=: 00:11:24.099 16:27:00 -- accel/accel.sh@20 -- # read -r var val 00:11:24.099 16:27:00 -- accel/accel.sh@21 -- # val= 00:11:24.099 16:27:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.099 16:27:00 -- accel/accel.sh@20 -- # IFS=: 00:11:24.099 16:27:00 -- accel/accel.sh@20 -- # read -r var val 00:11:24.099 16:27:00 -- accel/accel.sh@21 -- # val= 00:11:24.099 16:27:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.099 16:27:00 -- accel/accel.sh@20 -- # IFS=: 00:11:24.099 16:27:00 -- accel/accel.sh@20 -- # read -r var val 00:11:24.099 16:27:00 -- accel/accel.sh@21 -- # val= 00:11:24.099 16:27:00 -- accel/accel.sh@22 -- # case "$var" in 00:11:24.099 16:27:00 -- accel/accel.sh@20 -- # IFS=: 00:11:24.099 16:27:00 -- accel/accel.sh@20 -- # read -r var val 00:11:24.099 16:27:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:24.099 16:27:00 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:11:24.099 ************************************ 00:11:24.099 END TEST accel_dif_verify 00:11:24.099 ************************************ 00:11:24.099 16:27:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:24.099 00:11:24.099 real 0m4.605s 00:11:24.099 user 0m4.131s 00:11:24.099 sys 0m0.339s 00:11:24.099 16:27:00 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:11:24.099 16:27:00 -- common/autotest_common.sh@10 -- # set +x 00:11:24.099 16:27:00 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:24.099 16:27:00 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:24.099 16:27:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:24.099 16:27:00 -- common/autotest_common.sh@10 -- # set +x 00:11:24.099 ************************************ 00:11:24.099 START TEST accel_dif_generate 00:11:24.099 ************************************ 00:11:24.099 16:27:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:11:24.099 16:27:00 -- accel/accel.sh@16 -- # local accel_opc 00:11:24.099 16:27:00 -- accel/accel.sh@17 -- # local accel_module 00:11:24.099 16:27:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:11:24.099 16:27:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:24.099 16:27:00 -- accel/accel.sh@12 -- # build_accel_config 00:11:24.099 16:27:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:24.099 16:27:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:24.099 16:27:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:24.099 16:27:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:24.099 16:27:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:24.099 16:27:00 -- accel/accel.sh@41 -- # local IFS=, 00:11:24.099 16:27:00 -- accel/accel.sh@42 -- # jq -r . 00:11:24.099 [2024-07-11 16:27:00.741522] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:24.099 [2024-07-11 16:27:00.741711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109656 ] 00:11:24.357 [2024-07-11 16:27:00.906760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.357 [2024-07-11 16:27:01.108422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.258 16:27:03 -- accel/accel.sh@18 -- # out=' 00:11:26.258 SPDK Configuration: 00:11:26.258 Core mask: 0x1 00:11:26.258 00:11:26.258 Accel Perf Configuration: 00:11:26.258 Workload Type: dif_generate 00:11:26.258 Vector size: 4096 bytes 00:11:26.258 Transfer size: 4096 bytes 00:11:26.258 Block size: 512 bytes 00:11:26.258 Metadata size: 8 bytes 00:11:26.258 Vector count 1 00:11:26.258 Module: software 00:11:26.258 Queue depth: 32 00:11:26.258 Allocate depth: 32 00:11:26.258 # threads/core: 1 00:11:26.258 Run time: 1 seconds 00:11:26.258 Verify: No 00:11:26.258 00:11:26.258 Running for 1 seconds... 
00:11:26.258 00:11:26.258 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:26.258 ------------------------------------------------------------------------------------ 00:11:26.258 0,0 136608/s 541 MiB/s 0 0 00:11:26.258 ==================================================================================== 00:11:26.258 Total 136608/s 533 MiB/s 0 0' 00:11:26.258 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:26.258 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:26.258 16:27:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:26.258 16:27:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:26.258 16:27:03 -- accel/accel.sh@12 -- # build_accel_config 00:11:26.258 16:27:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:26.258 16:27:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:26.258 16:27:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:26.258 16:27:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:26.258 16:27:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:26.258 16:27:03 -- accel/accel.sh@41 -- # local IFS=, 00:11:26.258 16:27:03 -- accel/accel.sh@42 -- # jq -r . 00:11:26.258 [2024-07-11 16:27:03.065493] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:26.258 [2024-07-11 16:27:03.065691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109691 ] 00:11:26.516 [2024-07-11 16:27:03.228933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.774 [2024-07-11 16:27:03.412650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val= 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val= 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val=0x1 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val= 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val= 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val=dif_generate 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 
00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val='512 bytes' 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val='8 bytes' 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val= 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val=software 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@23 -- # accel_module=software 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val=32 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val=32 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val=1 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val=No 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val= 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:27.032 16:27:03 -- accel/accel.sh@21 -- # val= 00:11:27.032 16:27:03 -- accel/accel.sh@22 -- # case "$var" in 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # IFS=: 00:11:27.032 16:27:03 -- accel/accel.sh@20 -- # read -r var val 00:11:28.934 16:27:05 -- accel/accel.sh@21 -- # val= 00:11:28.934 16:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.934 16:27:05 -- accel/accel.sh@20 -- # IFS=: 00:11:28.934 16:27:05 -- accel/accel.sh@20 -- # read -r var val 00:11:28.934 16:27:05 -- accel/accel.sh@21 -- # val= 00:11:28.934 16:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.934 16:27:05 -- accel/accel.sh@20 -- # IFS=: 00:11:28.934 16:27:05 -- accel/accel.sh@20 -- # read -r var val 00:11:28.934 16:27:05 -- accel/accel.sh@21 -- # val= 00:11:28.934 16:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.934 16:27:05 -- 
accel/accel.sh@20 -- # IFS=: 00:11:28.934 16:27:05 -- accel/accel.sh@20 -- # read -r var val 00:11:28.934 16:27:05 -- accel/accel.sh@21 -- # val= 00:11:28.934 16:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.934 16:27:05 -- accel/accel.sh@20 -- # IFS=: 00:11:28.934 16:27:05 -- accel/accel.sh@20 -- # read -r var val 00:11:28.934 16:27:05 -- accel/accel.sh@21 -- # val= 00:11:28.934 16:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.934 16:27:05 -- accel/accel.sh@20 -- # IFS=: 00:11:28.934 16:27:05 -- accel/accel.sh@20 -- # read -r var val 00:11:28.934 16:27:05 -- accel/accel.sh@21 -- # val= 00:11:28.934 16:27:05 -- accel/accel.sh@22 -- # case "$var" in 00:11:28.934 16:27:05 -- accel/accel.sh@20 -- # IFS=: 00:11:28.934 16:27:05 -- accel/accel.sh@20 -- # read -r var val 00:11:28.934 16:27:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:28.934 16:27:05 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:11:28.934 16:27:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:28.934 00:11:28.934 real 0m4.636s 00:11:28.934 user 0m4.150s 00:11:28.934 sys 0m0.320s 00:11:28.934 16:27:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:28.934 ************************************ 00:11:28.934 END TEST accel_dif_generate 00:11:28.934 ************************************ 00:11:28.934 16:27:05 -- common/autotest_common.sh@10 -- # set +x 00:11:28.934 16:27:05 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:28.934 16:27:05 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:28.934 16:27:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:28.934 16:27:05 -- common/autotest_common.sh@10 -- # set +x 00:11:28.934 ************************************ 00:11:28.934 START TEST accel_dif_generate_copy 00:11:28.934 ************************************ 00:11:28.934 16:27:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:11:28.934 16:27:05 -- accel/accel.sh@16 -- # local accel_opc 00:11:28.934 16:27:05 -- accel/accel.sh@17 -- # local accel_module 00:11:28.934 16:27:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:11:28.934 16:27:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:28.934 16:27:05 -- accel/accel.sh@12 -- # build_accel_config 00:11:28.934 16:27:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:28.934 16:27:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:28.934 16:27:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:28.934 16:27:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:28.934 16:27:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:28.934 16:27:05 -- accel/accel.sh@41 -- # local IFS=, 00:11:28.934 16:27:05 -- accel/accel.sh@42 -- # jq -r . 00:11:28.934 [2024-07-11 16:27:05.426748] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
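Each of these tests is driven by run_test, which brackets the command in START/END banners and times it; the real/user/sys triplet above comes from that wrapper. A hypothetical sketch of the pattern, inferred from the banners in this log rather than taken from the actual autotest_common.sh:

  run_test() {
    # hypothetical: banner, time the wrapped command, banner again
    local name=$1; shift
    echo "START TEST $name"
    time "$@"
    echo "END TEST $name"
  }

invoked as in the trace above, e.g. run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy.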
00:11:28.934 [2024-07-11 16:27:05.426936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109751 ] 00:11:28.934 [2024-07-11 16:27:05.594121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.192 [2024-07-11 16:27:05.790116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.718 16:27:07 -- accel/accel.sh@18 -- # out=' 00:11:31.718 SPDK Configuration: 00:11:31.718 Core mask: 0x1 00:11:31.718 00:11:31.718 Accel Perf Configuration: 00:11:31.718 Workload Type: dif_generate_copy 00:11:31.718 Vector size: 4096 bytes 00:11:31.718 Transfer size: 4096 bytes 00:11:31.718 Vector count 1 00:11:31.718 Module: software 00:11:31.719 Queue depth: 32 00:11:31.719 Allocate depth: 32 00:11:31.719 # threads/core: 1 00:11:31.719 Run time: 1 seconds 00:11:31.719 Verify: No 00:11:31.719 00:11:31.719 Running for 1 seconds... 00:11:31.719 00:11:31.719 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:31.719 ------------------------------------------------------------------------------------ 00:11:31.719 0,0 88448/s 350 MiB/s 0 0 00:11:31.719 ==================================================================================== 00:11:31.719 Total 88448/s 345 MiB/s 0 0' 00:11:31.719 16:27:07 -- accel/accel.sh@20 -- # IFS=: 00:11:31.719 16:27:07 -- accel/accel.sh@20 -- # read -r var val 00:11:31.719 16:27:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:31.719 16:27:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:31.719 16:27:07 -- accel/accel.sh@12 -- # build_accel_config 00:11:31.719 16:27:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:31.719 16:27:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:31.719 16:27:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:31.719 16:27:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:31.719 16:27:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:31.719 16:27:07 -- accel/accel.sh@41 -- # local IFS=, 00:11:31.719 16:27:07 -- accel/accel.sh@42 -- # jq -r . 00:11:31.719 [2024-07-11 16:27:07.951432] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
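The same arithmetic check holds for the dif_generate_copy table above:

  $ echo $(( 88448 * 4096 / 1048576 ))
  345

Throughput drops from 533 MiB/s for plain dif_generate to 345 MiB/s here, plausibly because the copy variant also writes the data out in addition to generating protection information; the log itself does not give the reason.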
00:11:31.719 [2024-07-11 16:27:07.951603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109804 ] 00:11:31.719 [2024-07-11 16:27:08.109564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.719 [2024-07-11 16:27:08.362928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val= 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val= 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val=0x1 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val= 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val= 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val= 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val=software 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@23 -- # accel_module=software 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val=32 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val=32 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 
-- # val=1 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val=No 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val= 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:31.977 16:27:08 -- accel/accel.sh@21 -- # val= 00:11:31.977 16:27:08 -- accel/accel.sh@22 -- # case "$var" in 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # IFS=: 00:11:31.977 16:27:08 -- accel/accel.sh@20 -- # read -r var val 00:11:33.872 16:27:10 -- accel/accel.sh@21 -- # val= 00:11:33.872 16:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.873 16:27:10 -- accel/accel.sh@20 -- # IFS=: 00:11:33.873 16:27:10 -- accel/accel.sh@20 -- # read -r var val 00:11:33.873 16:27:10 -- accel/accel.sh@21 -- # val= 00:11:33.873 16:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.873 16:27:10 -- accel/accel.sh@20 -- # IFS=: 00:11:33.873 16:27:10 -- accel/accel.sh@20 -- # read -r var val 00:11:33.873 16:27:10 -- accel/accel.sh@21 -- # val= 00:11:33.873 16:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.873 16:27:10 -- accel/accel.sh@20 -- # IFS=: 00:11:33.873 16:27:10 -- accel/accel.sh@20 -- # read -r var val 00:11:33.873 16:27:10 -- accel/accel.sh@21 -- # val= 00:11:33.873 16:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.873 16:27:10 -- accel/accel.sh@20 -- # IFS=: 00:11:33.873 16:27:10 -- accel/accel.sh@20 -- # read -r var val 00:11:33.873 16:27:10 -- accel/accel.sh@21 -- # val= 00:11:33.873 16:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.873 16:27:10 -- accel/accel.sh@20 -- # IFS=: 00:11:33.873 16:27:10 -- accel/accel.sh@20 -- # read -r var val 00:11:33.873 16:27:10 -- accel/accel.sh@21 -- # val= 00:11:33.873 16:27:10 -- accel/accel.sh@22 -- # case "$var" in 00:11:33.873 16:27:10 -- accel/accel.sh@20 -- # IFS=: 00:11:33.873 16:27:10 -- accel/accel.sh@20 -- # read -r var val 00:11:33.873 16:27:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:33.873 16:27:10 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:11:33.873 16:27:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:33.873 00:11:33.873 real 0m5.109s 00:11:33.873 user 0m4.604s 00:11:33.873 sys 0m0.355s 00:11:33.873 16:27:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.873 16:27:10 -- common/autotest_common.sh@10 -- # set +x 00:11:33.873 ************************************ 00:11:33.873 END TEST accel_dif_generate_copy 00:11:33.873 ************************************ 00:11:33.873 16:27:10 -- accel/accel.sh@107 -- # [[ y == y ]] 00:11:33.873 16:27:10 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:33.873 16:27:10 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:33.873 16:27:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:33.873 16:27:10 -- 
common/autotest_common.sh@10 -- # set +x 00:11:33.873 ************************************ 00:11:33.873 START TEST accel_comp 00:11:33.873 ************************************ 00:11:33.873 16:27:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:33.873 16:27:10 -- accel/accel.sh@16 -- # local accel_opc 00:11:33.873 16:27:10 -- accel/accel.sh@17 -- # local accel_module 00:11:33.873 16:27:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:33.873 16:27:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:33.873 16:27:10 -- accel/accel.sh@12 -- # build_accel_config 00:11:33.873 16:27:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:33.873 16:27:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:33.873 16:27:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:33.873 16:27:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:33.873 16:27:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:33.873 16:27:10 -- accel/accel.sh@41 -- # local IFS=, 00:11:33.873 16:27:10 -- accel/accel.sh@42 -- # jq -r . 00:11:33.873 [2024-07-11 16:27:10.589647] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:33.873 [2024-07-11 16:27:10.590357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109860 ] 00:11:34.135 [2024-07-11 16:27:10.759564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.407 [2024-07-11 16:27:10.978368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.316 16:27:13 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:36.317 00:11:36.317 SPDK Configuration: 00:11:36.317 Core mask: 0x1 00:11:36.317 00:11:36.317 Accel Perf Configuration: 00:11:36.317 Workload Type: compress 00:11:36.317 Transfer size: 4096 bytes 00:11:36.317 Vector count 1 00:11:36.317 Module: software 00:11:36.317 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:36.317 Queue depth: 32 00:11:36.317 Allocate depth: 32 00:11:36.317 # threads/core: 1 00:11:36.317 Run time: 1 seconds 00:11:36.317 Verify: No 00:11:36.317 00:11:36.317 Running for 1 seconds... 
00:11:36.317 00:11:36.317 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:36.317 ------------------------------------------------------------------------------------ 00:11:36.317 0,0 44320/s 184 MiB/s 0 0 00:11:36.317 ==================================================================================== 00:11:36.317 Total 44320/s 173 MiB/s 0 0' 00:11:36.317 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:36.317 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:36.317 16:27:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:36.317 16:27:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:36.317 16:27:13 -- accel/accel.sh@12 -- # build_accel_config 00:11:36.317 16:27:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:36.317 16:27:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:36.317 16:27:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:36.317 16:27:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:36.317 16:27:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:36.317 16:27:13 -- accel/accel.sh@41 -- # local IFS=, 00:11:36.317 16:27:13 -- accel/accel.sh@42 -- # jq -r . 00:11:36.317 [2024-07-11 16:27:13.083246] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:36.317 [2024-07-11 16:27:13.083474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109894 ] 00:11:36.574 [2024-07-11 16:27:13.250065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.831 [2024-07-11 16:27:13.484559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val= 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val= 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val= 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val=0x1 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val= 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val= 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val=compress 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@24 -- # accel_opc=compress 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 
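Unlike the fixed-pattern workloads above, the compress run reads a real input file ('Preparing input file...'): the bib corpus under the SPDK tree, passed with -l. Under the same assumptions as the earlier sketch, the manual equivalent would be:

  $ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib

Note that Verify is reported as No for compress; the decompress runs below flip that.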
00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val= 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val=software 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@23 -- # accel_module=software 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val=32 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val=32 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val=1 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val=No 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val= 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:37.089 16:27:13 -- accel/accel.sh@21 -- # val= 00:11:37.089 16:27:13 -- accel/accel.sh@22 -- # case "$var" in 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # IFS=: 00:11:37.089 16:27:13 -- accel/accel.sh@20 -- # read -r var val 00:11:38.988 16:27:15 -- accel/accel.sh@21 -- # val= 00:11:38.989 16:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.989 16:27:15 -- accel/accel.sh@20 -- # IFS=: 00:11:38.989 16:27:15 -- accel/accel.sh@20 -- # read -r var val 00:11:38.989 16:27:15 -- accel/accel.sh@21 -- # val= 00:11:38.989 16:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.989 16:27:15 -- accel/accel.sh@20 -- # IFS=: 00:11:38.989 16:27:15 -- accel/accel.sh@20 -- # read -r var val 00:11:38.989 16:27:15 -- accel/accel.sh@21 -- # val= 00:11:38.989 16:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.989 16:27:15 -- accel/accel.sh@20 -- # IFS=: 00:11:38.989 16:27:15 -- accel/accel.sh@20 -- # read -r var val 00:11:38.989 16:27:15 -- accel/accel.sh@21 -- # val= 
00:11:38.989 16:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.989 16:27:15 -- accel/accel.sh@20 -- # IFS=: 00:11:38.989 16:27:15 -- accel/accel.sh@20 -- # read -r var val 00:11:38.989 16:27:15 -- accel/accel.sh@21 -- # val= 00:11:38.989 16:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.989 16:27:15 -- accel/accel.sh@20 -- # IFS=: 00:11:38.989 16:27:15 -- accel/accel.sh@20 -- # read -r var val 00:11:38.989 16:27:15 -- accel/accel.sh@21 -- # val= 00:11:38.989 16:27:15 -- accel/accel.sh@22 -- # case "$var" in 00:11:38.989 16:27:15 -- accel/accel.sh@20 -- # IFS=: 00:11:38.989 16:27:15 -- accel/accel.sh@20 -- # read -r var val 00:11:38.989 16:27:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:38.989 16:27:15 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:11:38.989 16:27:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:38.989 00:11:38.989 real 0m5.125s 00:11:38.989 user 0m4.602s 00:11:38.989 sys 0m0.371s 00:11:38.989 16:27:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.989 ************************************ 00:11:38.989 END TEST accel_comp 00:11:38.989 16:27:15 -- common/autotest_common.sh@10 -- # set +x 00:11:38.989 ************************************ 00:11:38.989 16:27:15 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:38.989 16:27:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:38.989 16:27:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:38.989 16:27:15 -- common/autotest_common.sh@10 -- # set +x 00:11:38.989 ************************************ 00:11:38.989 START TEST accel_decomp 00:11:38.989 ************************************ 00:11:38.989 16:27:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:38.989 16:27:15 -- accel/accel.sh@16 -- # local accel_opc 00:11:38.989 16:27:15 -- accel/accel.sh@17 -- # local accel_module 00:11:38.989 16:27:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:38.989 16:27:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:38.989 16:27:15 -- accel/accel.sh@12 -- # build_accel_config 00:11:38.989 16:27:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:38.989 16:27:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:38.989 16:27:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:38.989 16:27:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:38.989 16:27:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:38.989 16:27:15 -- accel/accel.sh@41 -- # local IFS=, 00:11:38.989 16:27:15 -- accel/accel.sh@42 -- # jq -r . 00:11:38.989 [2024-07-11 16:27:15.763263] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:38.989 [2024-07-11 16:27:15.763456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109953 ] 00:11:39.247 [2024-07-11 16:27:15.930752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:39.505 [2024-07-11 16:27:16.168458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.405 16:27:18 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:41.405 00:11:41.405 SPDK Configuration: 00:11:41.405 Core mask: 0x1 00:11:41.405 00:11:41.405 Accel Perf Configuration: 00:11:41.405 Workload Type: decompress 00:11:41.405 Transfer size: 4096 bytes 00:11:41.405 Vector count 1 00:11:41.405 Module: software 00:11:41.405 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:41.405 Queue depth: 32 00:11:41.405 Allocate depth: 32 00:11:41.405 # threads/core: 1 00:11:41.405 Run time: 1 seconds 00:11:41.405 Verify: Yes 00:11:41.405 00:11:41.405 Running for 1 seconds... 00:11:41.405 00:11:41.405 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:41.405 ------------------------------------------------------------------------------------ 00:11:41.405 0,0 71136/s 131 MiB/s 0 0 00:11:41.405 ==================================================================================== 00:11:41.405 Total 71136/s 277 MiB/s 0 0' 00:11:41.405 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.405 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.405 16:27:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:41.405 16:27:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:41.405 16:27:18 -- accel/accel.sh@12 -- # build_accel_config 00:11:41.405 16:27:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:41.405 16:27:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:41.405 16:27:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:41.405 16:27:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:41.405 16:27:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:41.405 16:27:18 -- accel/accel.sh@41 -- # local IFS=, 00:11:41.405 16:27:18 -- accel/accel.sh@42 -- # jq -r . 00:11:41.405 [2024-07-11 16:27:18.146454] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
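For decompress the harness appends -y, and the configuration above correspondingly reports Verify: Yes, so the decompressed output is checked rather than only timed; that pairing of flag and setting is inferred from this log, not from documentation. The manual equivalent, under the same assumptions as before:

  $ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y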
00:11:41.405 [2024-07-11 16:27:18.146653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110009 ] 00:11:41.663 [2024-07-11 16:27:18.314558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.921 [2024-07-11 16:27:18.508145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val= 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val= 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val= 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val=0x1 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val= 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val= 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val=decompress 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val= 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val=software 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@23 -- # accel_module=software 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val=32 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- 
accel/accel.sh@21 -- # val=32 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val=1 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val=Yes 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val= 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:41.921 16:27:18 -- accel/accel.sh@21 -- # val= 00:11:41.921 16:27:18 -- accel/accel.sh@22 -- # case "$var" in 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # IFS=: 00:11:41.921 16:27:18 -- accel/accel.sh@20 -- # read -r var val 00:11:43.821 16:27:20 -- accel/accel.sh@21 -- # val= 00:11:43.821 16:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.821 16:27:20 -- accel/accel.sh@20 -- # IFS=: 00:11:43.821 16:27:20 -- accel/accel.sh@20 -- # read -r var val 00:11:43.821 16:27:20 -- accel/accel.sh@21 -- # val= 00:11:43.821 16:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.821 16:27:20 -- accel/accel.sh@20 -- # IFS=: 00:11:43.821 16:27:20 -- accel/accel.sh@20 -- # read -r var val 00:11:43.822 16:27:20 -- accel/accel.sh@21 -- # val= 00:11:43.822 16:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.822 16:27:20 -- accel/accel.sh@20 -- # IFS=: 00:11:43.822 16:27:20 -- accel/accel.sh@20 -- # read -r var val 00:11:43.822 16:27:20 -- accel/accel.sh@21 -- # val= 00:11:43.822 16:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.822 16:27:20 -- accel/accel.sh@20 -- # IFS=: 00:11:43.822 16:27:20 -- accel/accel.sh@20 -- # read -r var val 00:11:43.822 16:27:20 -- accel/accel.sh@21 -- # val= 00:11:43.822 16:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.822 16:27:20 -- accel/accel.sh@20 -- # IFS=: 00:11:43.822 16:27:20 -- accel/accel.sh@20 -- # read -r var val 00:11:43.822 16:27:20 -- accel/accel.sh@21 -- # val= 00:11:43.822 16:27:20 -- accel/accel.sh@22 -- # case "$var" in 00:11:43.822 16:27:20 -- accel/accel.sh@20 -- # IFS=: 00:11:43.822 16:27:20 -- accel/accel.sh@20 -- # read -r var val 00:11:43.822 16:27:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:43.822 16:27:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:43.822 16:27:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:43.822 00:11:43.822 real 0m4.698s 00:11:43.822 user 0m4.209s 00:11:43.822 sys 0m0.340s 00:11:43.822 16:27:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.822 16:27:20 -- common/autotest_common.sh@10 -- # set +x 00:11:43.822 ************************************ 00:11:43.822 END TEST accel_decomp 00:11:43.822 ************************************ 00:11:43.822 16:27:20 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
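The final variant repeats the decompress workload with -o 0 appended. Judging from the configuration echoed below, that changes the transfer size from the 4096 bytes seen above to the full 111250-byte chunk; the log shows the effect of the flag, not its documented meaning. The bandwidth math lines up with the larger size as well:

  $ echo $(( 5344 * 111250 / 1048576 ))
  566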
00:11:43.822 16:27:20 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:43.822 16:27:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:43.822 16:27:20 -- common/autotest_common.sh@10 -- # set +x 00:11:43.822 ************************************ 00:11:43.822 START TEST accel_decmop_full 00:11:43.822 ************************************ 00:11:43.822 16:27:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:43.822 16:27:20 -- accel/accel.sh@16 -- # local accel_opc 00:11:43.822 16:27:20 -- accel/accel.sh@17 -- # local accel_module 00:11:43.822 16:27:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:43.822 16:27:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:43.822 16:27:20 -- accel/accel.sh@12 -- # build_accel_config 00:11:43.822 16:27:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:43.822 16:27:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:43.822 16:27:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:43.822 16:27:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:43.822 16:27:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:43.822 16:27:20 -- accel/accel.sh@41 -- # local IFS=, 00:11:43.822 16:27:20 -- accel/accel.sh@42 -- # jq -r . 00:11:43.822 [2024-07-11 16:27:20.512071] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:43.822 [2024-07-11 16:27:20.512281] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110056 ] 00:11:44.080 [2024-07-11 16:27:20.678927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.080 [2024-07-11 16:27:20.863261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.605 16:27:22 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:46.605 00:11:46.605 SPDK Configuration: 00:11:46.605 Core mask: 0x1 00:11:46.605 00:11:46.605 Accel Perf Configuration: 00:11:46.605 Workload Type: decompress 00:11:46.605 Transfer size: 111250 bytes 00:11:46.605 Vector count 1 00:11:46.605 Module: software 00:11:46.605 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:46.605 Queue depth: 32 00:11:46.605 Allocate depth: 32 00:11:46.605 # threads/core: 1 00:11:46.605 Run time: 1 seconds 00:11:46.605 Verify: Yes 00:11:46.605 00:11:46.605 Running for 1 seconds... 
00:11:46.605 00:11:46.605 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:46.605 ------------------------------------------------------------------------------------ 00:11:46.605 0,0 5344/s 220 MiB/s 0 0 00:11:46.605 ==================================================================================== 00:11:46.605 Total 5344/s 566 MiB/s 0 0' 00:11:46.605 16:27:22 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:22 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:46.605 16:27:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:46.605 16:27:22 -- accel/accel.sh@12 -- # build_accel_config 00:11:46.605 16:27:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:46.605 16:27:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:46.605 16:27:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:46.605 16:27:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:46.605 16:27:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:46.605 16:27:22 -- accel/accel.sh@41 -- # local IFS=, 00:11:46.605 16:27:22 -- accel/accel.sh@42 -- # jq -r . 00:11:46.605 [2024-07-11 16:27:22.842009] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:46.605 [2024-07-11 16:27:22.842207] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110097 ] 00:11:46.605 [2024-07-11 16:27:23.006799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.605 [2024-07-11 16:27:23.203237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val= 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val= 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val= 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val=0x1 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val= 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val= 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val=decompress 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:46.605 16:27:23 -- 
accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val= 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val=software 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@23 -- # accel_module=software 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val=32 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val=32 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val=1 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val=Yes 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val= 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:46.605 16:27:23 -- accel/accel.sh@21 -- # val= 00:11:46.605 16:27:23 -- accel/accel.sh@22 -- # case "$var" in 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # IFS=: 00:11:46.605 16:27:23 -- accel/accel.sh@20 -- # read -r var val 00:11:48.498 16:27:25 -- accel/accel.sh@21 -- # val= 00:11:48.498 16:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.498 16:27:25 -- accel/accel.sh@20 -- # IFS=: 00:11:48.498 16:27:25 -- accel/accel.sh@20 -- # read -r var val 00:11:48.498 16:27:25 -- accel/accel.sh@21 -- # val= 00:11:48.498 16:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.498 16:27:25 -- accel/accel.sh@20 -- # IFS=: 00:11:48.498 16:27:25 -- accel/accel.sh@20 -- # read -r var val 00:11:48.498 16:27:25 -- accel/accel.sh@21 -- # val= 00:11:48.498 16:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.498 16:27:25 -- accel/accel.sh@20 -- # IFS=: 00:11:48.498 16:27:25 -- accel/accel.sh@20 -- # read -r var val 00:11:48.498 16:27:25 -- 
accel/accel.sh@21 -- # val= 00:11:48.498 16:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.499 16:27:25 -- accel/accel.sh@20 -- # IFS=: 00:11:48.499 16:27:25 -- accel/accel.sh@20 -- # read -r var val 00:11:48.499 16:27:25 -- accel/accel.sh@21 -- # val= 00:11:48.499 16:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.499 16:27:25 -- accel/accel.sh@20 -- # IFS=: 00:11:48.499 16:27:25 -- accel/accel.sh@20 -- # read -r var val 00:11:48.499 16:27:25 -- accel/accel.sh@21 -- # val= 00:11:48.499 16:27:25 -- accel/accel.sh@22 -- # case "$var" in 00:11:48.499 16:27:25 -- accel/accel.sh@20 -- # IFS=: 00:11:48.499 16:27:25 -- accel/accel.sh@20 -- # read -r var val 00:11:48.499 ************************************ 00:11:48.499 END TEST accel_decmop_full 00:11:48.499 ************************************ 00:11:48.499 16:27:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:48.499 16:27:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:48.499 16:27:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:48.499 00:11:48.499 real 0m4.664s 00:11:48.499 user 0m4.178s 00:11:48.499 sys 0m0.337s 00:11:48.499 16:27:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.499 16:27:25 -- common/autotest_common.sh@10 -- # set +x 00:11:48.499 16:27:25 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:48.499 16:27:25 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:48.499 16:27:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:48.499 16:27:25 -- common/autotest_common.sh@10 -- # set +x 00:11:48.499 ************************************ 00:11:48.499 START TEST accel_decomp_mcore 00:11:48.499 ************************************ 00:11:48.499 16:27:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:48.499 16:27:25 -- accel/accel.sh@16 -- # local accel_opc 00:11:48.499 16:27:25 -- accel/accel.sh@17 -- # local accel_module 00:11:48.499 16:27:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:48.499 16:27:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:48.499 16:27:25 -- accel/accel.sh@12 -- # build_accel_config 00:11:48.499 16:27:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:48.499 16:27:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:48.499 16:27:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:48.499 16:27:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:48.499 16:27:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:48.499 16:27:25 -- accel/accel.sh@41 -- # local IFS=, 00:11:48.499 16:27:25 -- accel/accel.sh@42 -- # jq -r . 00:11:48.499 [2024-07-11 16:27:25.230981] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
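The accel_decomp_mcore case starting here adds -m 0xf: a hexadecimal core mask enabling cores 0-3, which is why the EAL output that follows reports four available cores and four reactors come up. A quick sketch of how such a mask decodes, in plain bash arithmetic (no SPDK involved):

  mask=0xf
  for core in 0 1 2 3; do
      (( mask & (1 << core) )) && echo "core $core enabled by mask"
  done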
00:11:48.499 [2024-07-11 16:27:25.231356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110144 ] 00:11:48.756 [2024-07-11 16:27:25.415162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.014 [2024-07-11 16:27:25.605756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.014 [2024-07-11 16:27:25.605869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.014 [2024-07-11 16:27:25.605913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:49.014 [2024-07-11 16:27:25.605915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.912 16:27:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:50.912 00:11:50.912 SPDK Configuration: 00:11:50.912 Core mask: 0xf 00:11:50.912 00:11:50.912 Accel Perf Configuration: 00:11:50.912 Workload Type: decompress 00:11:50.912 Transfer size: 4096 bytes 00:11:50.912 Vector count 1 00:11:50.912 Module: software 00:11:50.912 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:50.912 Queue depth: 32 00:11:50.912 Allocate depth: 32 00:11:50.912 # threads/core: 1 00:11:50.912 Run time: 1 seconds 00:11:50.912 Verify: Yes 00:11:50.912 00:11:50.912 Running for 1 seconds... 00:11:50.912 00:11:50.912 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:50.912 ------------------------------------------------------------------------------------ 00:11:50.912 0,0 58784/s 108 MiB/s 0 0 00:11:50.913 3,0 57984/s 106 MiB/s 0 0 00:11:50.913 2,0 58528/s 107 MiB/s 0 0 00:11:50.913 1,0 58816/s 108 MiB/s 0 0 00:11:50.913 ==================================================================================== 00:11:50.913 Total 234112/s 914 MiB/s 0 0' 00:11:50.913 16:27:27 -- accel/accel.sh@20 -- # IFS=: 00:11:50.913 16:27:27 -- accel/accel.sh@20 -- # read -r var val 00:11:50.913 16:27:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:50.913 16:27:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:50.913 16:27:27 -- accel/accel.sh@12 -- # build_accel_config 00:11:50.913 16:27:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:50.913 16:27:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:50.913 16:27:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:50.913 16:27:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:50.913 16:27:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:50.913 16:27:27 -- accel/accel.sh@41 -- # local IFS=, 00:11:50.913 16:27:27 -- accel/accel.sh@42 -- # jq -r . 00:11:50.913 [2024-07-11 16:27:27.662353] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
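In the results table above, the four reactors decompress almost evenly (roughly 58-59k transfers/s each), and the Total row is consistent with transfers x transfer size, taking MiB as 2^20 bytes:

  echo $(( 234112 * 4096 / 1048576 ))   # prints 914, matching 914 MiB/s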
00:11:50.913 [2024-07-11 16:27:27.662828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110211 ] 00:11:51.170 [2024-07-11 16:27:27.847948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.429 [2024-07-11 16:27:28.030541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.429 [2024-07-11 16:27:28.030666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.429 [2024-07-11 16:27:28.030777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.429 [2024-07-11 16:27:28.031056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val= 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val= 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val= 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val=0xf 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val= 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val= 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val=decompress 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val= 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val=software 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@23 -- # accel_module=software 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 
00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val=32 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val=32 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val=1 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val=Yes 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val= 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:51.429 16:27:28 -- accel/accel.sh@21 -- # val= 00:11:51.429 16:27:28 -- accel/accel.sh@22 -- # case "$var" in 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # IFS=: 00:11:51.429 16:27:28 -- accel/accel.sh@20 -- # read -r var val 00:11:53.364 16:27:30 -- accel/accel.sh@21 -- # val= 00:11:53.364 16:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # IFS=: 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # read -r var val 00:11:53.364 16:27:30 -- accel/accel.sh@21 -- # val= 00:11:53.364 16:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # IFS=: 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # read -r var val 00:11:53.364 16:27:30 -- accel/accel.sh@21 -- # val= 00:11:53.364 16:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # IFS=: 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # read -r var val 00:11:53.364 16:27:30 -- accel/accel.sh@21 -- # val= 00:11:53.364 16:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # IFS=: 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # read -r var val 00:11:53.364 16:27:30 -- accel/accel.sh@21 -- # val= 00:11:53.364 16:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # IFS=: 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # read -r var val 00:11:53.364 16:27:30 -- accel/accel.sh@21 -- # val= 00:11:53.364 16:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # IFS=: 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # read -r var val 00:11:53.364 16:27:30 -- accel/accel.sh@21 -- # val= 00:11:53.364 16:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # IFS=: 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # read -r var val 00:11:53.364 16:27:30 -- accel/accel.sh@21 -- # val= 00:11:53.364 16:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # IFS=: 00:11:53.364 16:27:30 -- 
accel/accel.sh@20 -- # read -r var val 00:11:53.364 16:27:30 -- accel/accel.sh@21 -- # val= 00:11:53.364 16:27:30 -- accel/accel.sh@22 -- # case "$var" in 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # IFS=: 00:11:53.364 16:27:30 -- accel/accel.sh@20 -- # read -r var val 00:11:53.364 ************************************ 00:11:53.364 END TEST accel_decomp_mcore 00:11:53.364 ************************************ 00:11:53.364 16:27:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:53.364 16:27:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:53.364 16:27:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:53.364 00:11:53.364 real 0m4.959s 00:11:53.364 user 0m14.555s 00:11:53.364 sys 0m0.423s 00:11:53.364 16:27:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.364 16:27:30 -- common/autotest_common.sh@10 -- # set +x 00:11:53.623 16:27:30 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:53.623 16:27:30 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:11:53.623 16:27:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:53.623 16:27:30 -- common/autotest_common.sh@10 -- # set +x 00:11:53.623 ************************************ 00:11:53.623 START TEST accel_decomp_full_mcore 00:11:53.623 ************************************ 00:11:53.623 16:27:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:53.623 16:27:30 -- accel/accel.sh@16 -- # local accel_opc 00:11:53.623 16:27:30 -- accel/accel.sh@17 -- # local accel_module 00:11:53.623 16:27:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:53.623 16:27:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:53.623 16:27:30 -- accel/accel.sh@12 -- # build_accel_config 00:11:53.623 16:27:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:53.623 16:27:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:53.623 16:27:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:53.623 16:27:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:53.623 16:27:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:53.623 16:27:30 -- accel/accel.sh@41 -- # local IFS=, 00:11:53.623 16:27:30 -- accel/accel.sh@42 -- # jq -r . 00:11:53.623 [2024-07-11 16:27:30.230010] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:53.623 [2024-07-11 16:27:30.230309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110262 ] 00:11:53.624 [2024-07-11 16:27:30.407925] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:53.882 [2024-07-11 16:27:30.641309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.882 [2024-07-11 16:27:30.641435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.882 [2024-07-11 16:27:30.641547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.882 [2024-07-11 16:27:30.641549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.414 16:27:32 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:56.414 00:11:56.414 SPDK Configuration: 00:11:56.414 Core mask: 0xf 00:11:56.414 00:11:56.414 Accel Perf Configuration: 00:11:56.414 Workload Type: decompress 00:11:56.414 Transfer size: 111250 bytes 00:11:56.414 Vector count 1 00:11:56.414 Module: software 00:11:56.414 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:56.414 Queue depth: 32 00:11:56.414 Allocate depth: 32 00:11:56.414 # threads/core: 1 00:11:56.414 Run time: 1 seconds 00:11:56.414 Verify: Yes 00:11:56.414 00:11:56.414 Running for 1 seconds... 00:11:56.414 00:11:56.414 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:56.414 ------------------------------------------------------------------------------------ 00:11:56.414 0,0 4416/s 182 MiB/s 0 0 00:11:56.414 3,0 4384/s 181 MiB/s 0 0 00:11:56.414 2,0 4416/s 182 MiB/s 0 0 00:11:56.414 1,0 4384/s 181 MiB/s 0 0 00:11:56.414 ==================================================================================== 00:11:56.414 Total 17600/s 1867 MiB/s 0 0' 00:11:56.414 16:27:32 -- accel/accel.sh@20 -- # IFS=: 00:11:56.414 16:27:32 -- accel/accel.sh@20 -- # read -r var val 00:11:56.414 16:27:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:56.414 16:27:32 -- accel/accel.sh@12 -- # build_accel_config 00:11:56.414 16:27:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:56.414 16:27:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:56.414 16:27:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:56.414 16:27:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:56.414 16:27:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:56.414 16:27:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:56.414 16:27:32 -- accel/accel.sh@41 -- # local IFS=, 00:11:56.414 16:27:32 -- accel/accel.sh@42 -- # jq -r . 00:11:56.414 [2024-07-11 16:27:32.723370] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
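Relative to the 4096-byte mcore run above (234112 transfers/s, 914 MiB/s), this full-buffer variant completes far fewer transfers, but each one is about 27x larger, so byte throughput roughly doubles. The Total row again reconciles:

  echo $(( 17600 * 111250 / 1048576 ))  # prints 1867, matching 1867 MiB/s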
00:11:56.414 [2024-07-11 16:27:32.724314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110299 ] 00:11:56.414 [2024-07-11 16:27:32.907719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.414 [2024-07-11 16:27:33.092871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.414 [2024-07-11 16:27:33.093012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.414 [2024-07-11 16:27:33.093139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.414 [2024-07-11 16:27:33.093140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val= 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val= 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val= 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val=0xf 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val= 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val= 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val=decompress 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val= 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val=software 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@23 -- # accel_module=software 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 
00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val=32 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val=32 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val=1 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val=Yes 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val= 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:56.671 16:27:33 -- accel/accel.sh@21 -- # val= 00:11:56.671 16:27:33 -- accel/accel.sh@22 -- # case "$var" in 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # IFS=: 00:11:56.671 16:27:33 -- accel/accel.sh@20 -- # read -r var val 00:11:58.569 16:27:35 -- accel/accel.sh@21 -- # val= 00:11:58.569 16:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.569 16:27:35 -- accel/accel.sh@20 -- # IFS=: 00:11:58.569 16:27:35 -- accel/accel.sh@20 -- # read -r var val 00:11:58.569 16:27:35 -- accel/accel.sh@21 -- # val= 00:11:58.569 16:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.569 16:27:35 -- accel/accel.sh@20 -- # IFS=: 00:11:58.569 16:27:35 -- accel/accel.sh@20 -- # read -r var val 00:11:58.569 16:27:35 -- accel/accel.sh@21 -- # val= 00:11:58.569 16:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.569 16:27:35 -- accel/accel.sh@20 -- # IFS=: 00:11:58.570 16:27:35 -- accel/accel.sh@20 -- # read -r var val 00:11:58.570 16:27:35 -- accel/accel.sh@21 -- # val= 00:11:58.570 16:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.570 16:27:35 -- accel/accel.sh@20 -- # IFS=: 00:11:58.570 16:27:35 -- accel/accel.sh@20 -- # read -r var val 00:11:58.570 16:27:35 -- accel/accel.sh@21 -- # val= 00:11:58.570 16:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.570 16:27:35 -- accel/accel.sh@20 -- # IFS=: 00:11:58.570 16:27:35 -- accel/accel.sh@20 -- # read -r var val 00:11:58.570 16:27:35 -- accel/accel.sh@21 -- # val= 00:11:58.570 16:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.570 16:27:35 -- accel/accel.sh@20 -- # IFS=: 00:11:58.570 16:27:35 -- accel/accel.sh@20 -- # read -r var val 00:11:58.570 16:27:35 -- accel/accel.sh@21 -- # val= 00:11:58.570 16:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.570 16:27:35 -- accel/accel.sh@20 -- # IFS=: 00:11:58.570 16:27:35 -- accel/accel.sh@20 -- # read -r var val 00:11:58.570 16:27:35 -- accel/accel.sh@21 -- # val= 00:11:58.570 16:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.570 16:27:35 -- accel/accel.sh@20 -- # IFS=: 00:11:58.570 16:27:35 -- 
accel/accel.sh@20 -- # read -r var val 00:11:58.570 16:27:35 -- accel/accel.sh@21 -- # val= 00:11:58.570 16:27:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:58.570 16:27:35 -- accel/accel.sh@20 -- # IFS=: 00:11:58.570 16:27:35 -- accel/accel.sh@20 -- # read -r var val 00:11:58.570 ************************************ 00:11:58.570 END TEST accel_decomp_full_mcore 00:11:58.570 ************************************ 00:11:58.570 16:27:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:58.570 16:27:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:58.570 16:27:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:58.570 00:11:58.570 real 0m4.982s 00:11:58.570 user 0m14.750s 00:11:58.570 sys 0m0.407s 00:11:58.570 16:27:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:58.570 16:27:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.570 16:27:35 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:58.570 16:27:35 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:11:58.570 16:27:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:58.570 16:27:35 -- common/autotest_common.sh@10 -- # set +x 00:11:58.570 ************************************ 00:11:58.570 START TEST accel_decomp_mthread 00:11:58.570 ************************************ 00:11:58.570 16:27:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:58.570 16:27:35 -- accel/accel.sh@16 -- # local accel_opc 00:11:58.570 16:27:35 -- accel/accel.sh@17 -- # local accel_module 00:11:58.570 16:27:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:58.570 16:27:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:58.570 16:27:35 -- accel/accel.sh@12 -- # build_accel_config 00:11:58.570 16:27:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:58.570 16:27:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:58.570 16:27:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:58.570 16:27:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:58.570 16:27:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:58.570 16:27:35 -- accel/accel.sh@41 -- # local IFS=, 00:11:58.570 16:27:35 -- accel/accel.sh@42 -- # jq -r . 00:11:58.570 [2024-07-11 16:27:35.276807] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:58.570 [2024-07-11 16:27:35.277021] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110356 ] 00:11:58.829 [2024-07-11 16:27:35.443167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.088 [2024-07-11 16:27:35.645637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.991 16:27:37 -- accel/accel.sh@18 -- # out='Preparing input file... 
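accel_decomp_mthread swaps the core mask of the previous cases for -T 2, asking accel_perf for two worker threads on the single enabled core, so the results table below carries two rows for core 0 (threads 0 and 1). The underlying command, as traced at accel.sh@12 above:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c /dev/fd/62 -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2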
00:12:00.991 00:12:00.991 SPDK Configuration: 00:12:00.991 Core mask: 0x1 00:12:00.991 00:12:00.991 Accel Perf Configuration: 00:12:00.991 Workload Type: decompress 00:12:00.991 Transfer size: 4096 bytes 00:12:00.991 Vector count 1 00:12:00.991 Module: software 00:12:00.991 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:00.991 Queue depth: 32 00:12:00.991 Allocate depth: 32 00:12:00.991 # threads/core: 2 00:12:00.991 Run time: 1 seconds 00:12:00.991 Verify: Yes 00:12:00.991 00:12:00.991 Running for 1 seconds... 00:12:00.991 00:12:00.991 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:00.991 ------------------------------------------------------------------------------------ 00:12:00.991 0,1 32640/s 60 MiB/s 0 0 00:12:00.991 0,0 32544/s 59 MiB/s 0 0 00:12:00.991 ==================================================================================== 00:12:00.991 Total 65184/s 254 MiB/s 0 0' 00:12:00.991 16:27:37 -- accel/accel.sh@20 -- # IFS=: 00:12:00.991 16:27:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:00.991 16:27:37 -- accel/accel.sh@20 -- # read -r var val 00:12:00.991 16:27:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:00.991 16:27:37 -- accel/accel.sh@12 -- # build_accel_config 00:12:00.991 16:27:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:00.991 16:27:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:00.991 16:27:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:00.991 16:27:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:00.991 16:27:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:00.991 16:27:37 -- accel/accel.sh@41 -- # local IFS=, 00:12:00.991 16:27:37 -- accel/accel.sh@42 -- # jq -r . 00:12:00.991 [2024-07-11 16:27:37.714429] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
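The two thread rows above (32640/s and 32544/s) sum to the 65184/s Total, and that total again matches the 4096-byte transfer size:

  echo $(( 65184 * 4096 / 1048576 ))    # prints 254, matching 254 MiB/s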
00:12:00.991 [2024-07-11 16:27:37.714626] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110404 ] 00:12:01.250 [2024-07-11 16:27:37.883411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.507 [2024-07-11 16:27:38.099632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val= 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val= 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val= 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val=0x1 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val= 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val= 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val=decompress 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val= 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val=software 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@23 -- # accel_module=software 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val=32 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- 
accel/accel.sh@21 -- # val=32 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val=2 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val=Yes 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val= 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:01.507 16:27:38 -- accel/accel.sh@21 -- # val= 00:12:01.507 16:27:38 -- accel/accel.sh@22 -- # case "$var" in 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # IFS=: 00:12:01.507 16:27:38 -- accel/accel.sh@20 -- # read -r var val 00:12:03.409 16:27:40 -- accel/accel.sh@21 -- # val= 00:12:03.409 16:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # IFS=: 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # read -r var val 00:12:03.409 16:27:40 -- accel/accel.sh@21 -- # val= 00:12:03.409 16:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # IFS=: 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # read -r var val 00:12:03.409 16:27:40 -- accel/accel.sh@21 -- # val= 00:12:03.409 16:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # IFS=: 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # read -r var val 00:12:03.409 16:27:40 -- accel/accel.sh@21 -- # val= 00:12:03.409 16:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # IFS=: 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # read -r var val 00:12:03.409 16:27:40 -- accel/accel.sh@21 -- # val= 00:12:03.409 16:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # IFS=: 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # read -r var val 00:12:03.409 16:27:40 -- accel/accel.sh@21 -- # val= 00:12:03.409 16:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # IFS=: 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # read -r var val 00:12:03.409 16:27:40 -- accel/accel.sh@21 -- # val= 00:12:03.409 16:27:40 -- accel/accel.sh@22 -- # case "$var" in 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # IFS=: 00:12:03.409 16:27:40 -- accel/accel.sh@20 -- # read -r var val 00:12:03.409 16:27:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:03.409 16:27:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:03.409 16:27:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:03.409 ************************************ 00:12:03.409 END TEST accel_decomp_mthread 00:12:03.409 ************************************ 00:12:03.409 00:12:03.409 real 0m4.832s 00:12:03.409 user 0m4.319s 00:12:03.409 sys 0m0.350s 00:12:03.409 16:27:40 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:12:03.409 16:27:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.409 16:27:40 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:03.409 16:27:40 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:12:03.409 16:27:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:03.409 16:27:40 -- common/autotest_common.sh@10 -- # set +x 00:12:03.409 ************************************ 00:12:03.409 START TEST accel_deomp_full_mthread 00:12:03.409 ************************************ 00:12:03.409 16:27:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:03.409 16:27:40 -- accel/accel.sh@16 -- # local accel_opc 00:12:03.409 16:27:40 -- accel/accel.sh@17 -- # local accel_module 00:12:03.409 16:27:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:03.409 16:27:40 -- accel/accel.sh@12 -- # build_accel_config 00:12:03.409 16:27:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:03.409 16:27:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:03.409 16:27:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:03.409 16:27:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:03.409 16:27:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:03.409 16:27:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:03.409 16:27:40 -- accel/accel.sh@41 -- # local IFS=, 00:12:03.409 16:27:40 -- accel/accel.sh@42 -- # jq -r . 00:12:03.409 [2024-07-11 16:27:40.159036] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:03.409 [2024-07-11 16:27:40.159235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110466 ] 00:12:03.667 [2024-07-11 16:27:40.326547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.925 [2024-07-11 16:27:40.507042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.828 16:27:42 -- accel/accel.sh@18 -- # out='Preparing input file... 00:12:05.828 00:12:05.828 SPDK Configuration: 00:12:05.828 Core mask: 0x1 00:12:05.828 00:12:05.828 Accel Perf Configuration: 00:12:05.828 Workload Type: decompress 00:12:05.828 Transfer size: 111250 bytes 00:12:05.828 Vector count 1 00:12:05.828 Module: software 00:12:05.828 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:05.828 Queue depth: 32 00:12:05.828 Allocate depth: 32 00:12:05.828 # threads/core: 2 00:12:05.828 Run time: 1 seconds 00:12:05.828 Verify: Yes 00:12:05.828 00:12:05.828 Running for 1 seconds... 
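accel_deomp_full_mthread (the "deomp" spelling again comes straight from accel.sh) combines the two previous variations: full 111250-byte transfers and two threads on core 0. In the table that follows, the per-thread rows (2688/s and 2656/s) sum to the 5344/s Total, which lines up with the reported bandwidth:

  echo $(( 5344 * 111250 / 1048576 ))   # prints 566, matching 566 MiB/s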
00:12:05.828 00:12:05.828 Core,Thread Transfers Bandwidth Failed Miscompares 00:12:05.828 ------------------------------------------------------------------------------------ 00:12:05.828 0,1 2688/s 111 MiB/s 0 0 00:12:05.828 0,0 2656/s 109 MiB/s 0 0 00:12:05.828 ==================================================================================== 00:12:05.828 Total 5344/s 566 MiB/s 0 0' 00:12:05.828 16:27:42 -- accel/accel.sh@20 -- # IFS=: 00:12:05.828 16:27:42 -- accel/accel.sh@20 -- # read -r var val 00:12:05.828 16:27:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:05.828 16:27:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:05.828 16:27:42 -- accel/accel.sh@12 -- # build_accel_config 00:12:05.828 16:27:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:05.828 16:27:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:05.828 16:27:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:05.828 16:27:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:05.828 16:27:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:05.828 16:27:42 -- accel/accel.sh@41 -- # local IFS=, 00:12:05.828 16:27:42 -- accel/accel.sh@42 -- # jq -r . 00:12:05.828 [2024-07-11 16:27:42.508953] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:05.828 [2024-07-11 16:27:42.509199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110500 ] 00:12:06.087 [2024-07-11 16:27:42.678125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.087 [2024-07-11 16:27:42.865352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.344 16:27:43 -- accel/accel.sh@21 -- # val= 00:12:06.344 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.344 16:27:43 -- accel/accel.sh@21 -- # val= 00:12:06.344 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.344 16:27:43 -- accel/accel.sh@21 -- # val= 00:12:06.344 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.344 16:27:43 -- accel/accel.sh@21 -- # val=0x1 00:12:06.344 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.344 16:27:43 -- accel/accel.sh@21 -- # val= 00:12:06.344 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.344 16:27:43 -- accel/accel.sh@21 -- # val= 00:12:06.344 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.344 16:27:43 -- accel/accel.sh@21 -- # val=decompress 00:12:06.344 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.344 16:27:43 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.344 16:27:43 -- accel/accel.sh@21 -- # val='111250 bytes' 00:12:06.344 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.344 16:27:43 -- accel/accel.sh@21 -- # val= 00:12:06.344 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.344 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.344 16:27:43 -- accel/accel.sh@21 -- # val=software 00:12:06.344 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.344 16:27:43 -- accel/accel.sh@23 -- # accel_module=software 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.345 16:27:43 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:06.345 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.345 16:27:43 -- accel/accel.sh@21 -- # val=32 00:12:06.345 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.345 16:27:43 -- accel/accel.sh@21 -- # val=32 00:12:06.345 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.345 16:27:43 -- accel/accel.sh@21 -- # val=2 00:12:06.345 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.345 16:27:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:12:06.345 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.345 16:27:43 -- accel/accel.sh@21 -- # val=Yes 00:12:06.345 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.345 16:27:43 -- accel/accel.sh@21 -- # val= 00:12:06.345 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:06.345 16:27:43 -- accel/accel.sh@21 -- # val= 00:12:06.345 16:27:43 -- accel/accel.sh@22 -- # case "$var" in 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # IFS=: 00:12:06.345 16:27:43 -- accel/accel.sh@20 -- # read -r var val 00:12:08.260 16:27:44 -- accel/accel.sh@21 -- # val= 00:12:08.260 16:27:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # IFS=: 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # read -r var val 00:12:08.260 16:27:44 -- accel/accel.sh@21 -- # val= 00:12:08.260 16:27:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # IFS=: 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # read -r var val 00:12:08.260 16:27:44 -- accel/accel.sh@21 -- # val= 00:12:08.260 16:27:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # IFS=: 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # 
read -r var val 00:12:08.260 16:27:44 -- accel/accel.sh@21 -- # val= 00:12:08.260 16:27:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # IFS=: 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # read -r var val 00:12:08.260 16:27:44 -- accel/accel.sh@21 -- # val= 00:12:08.260 16:27:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # IFS=: 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # read -r var val 00:12:08.260 16:27:44 -- accel/accel.sh@21 -- # val= 00:12:08.260 16:27:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # IFS=: 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # read -r var val 00:12:08.260 16:27:44 -- accel/accel.sh@21 -- # val= 00:12:08.260 16:27:44 -- accel/accel.sh@22 -- # case "$var" in 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # IFS=: 00:12:08.260 16:27:44 -- accel/accel.sh@20 -- # read -r var val 00:12:08.260 16:27:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:12:08.260 16:27:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:12:08.260 16:27:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:08.260 00:12:08.260 real 0m4.731s 00:12:08.260 user 0m4.269s 00:12:08.260 sys 0m0.326s 00:12:08.260 16:27:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:08.260 16:27:44 -- common/autotest_common.sh@10 -- # set +x 00:12:08.260 ************************************ 00:12:08.260 END TEST accel_deomp_full_mthread 00:12:08.260 ************************************ 00:12:08.260 16:27:44 -- accel/accel.sh@116 -- # [[ n == y ]] 00:12:08.261 16:27:44 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:08.261 16:27:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:08.261 16:27:44 -- accel/accel.sh@129 -- # build_accel_config 00:12:08.261 16:27:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:08.261 16:27:44 -- common/autotest_common.sh@10 -- # set +x 00:12:08.261 16:27:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:12:08.261 16:27:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:08.261 16:27:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:08.261 16:27:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:12:08.261 16:27:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:12:08.261 16:27:44 -- accel/accel.sh@41 -- # local IFS=, 00:12:08.261 16:27:44 -- accel/accel.sh@42 -- # jq -r . 00:12:08.261 ************************************ 00:12:08.261 START TEST accel_dif_functional_tests 00:12:08.261 ************************************ 00:12:08.261 16:27:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:08.261 [2024-07-11 16:27:44.984960] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
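accel_dif_functional_tests switches from throughput to correctness: a CUnit suite, run on three cores per the 0x7 mask in the EAL parameters below, that generates buffers carrying DIF protection metadata and verifies its three fields: GUARD (a per-block checksum), APPTAG and REFTAG. The "not generated" and "incorrect" cases are deliberate negative tests, so the dif.c *ERROR* lines below are the expected output of passing tests. The suite binary is invoked directly, with its JSON config on an inherited descriptor:

  /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62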
00:12:08.261 [2024-07-11 16:27:44.985174] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110547 ] 00:12:08.519 [2024-07-11 16:27:45.163982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:08.777 [2024-07-11 16:27:45.342892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.777 [2024-07-11 16:27:45.343031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.777 [2024-07-11 16:27:45.343031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.036 00:12:09.036 00:12:09.036 CUnit - A unit testing framework for C - Version 2.1-3 00:12:09.036 http://cunit.sourceforge.net/ 00:12:09.036 00:12:09.036 00:12:09.036 Suite: accel_dif 00:12:09.036 Test: verify: DIF generated, GUARD check ...passed 00:12:09.036 Test: verify: DIF generated, APPTAG check ...passed 00:12:09.036 Test: verify: DIF generated, REFTAG check ...passed 00:12:09.036 Test: verify: DIF not generated, GUARD check ...[2024-07-11 16:27:45.632510] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:09.036 [2024-07-11 16:27:45.632690] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:09.036 passed 00:12:09.036 Test: verify: DIF not generated, APPTAG check ...passed 00:12:09.036 Test: verify: DIF not generated, REFTAG check ...[2024-07-11 16:27:45.632816] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:09.036 [2024-07-11 16:27:45.632880] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:09.036 [2024-07-11 16:27:45.632981] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:09.036 [2024-07-11 16:27:45.633053] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:09.036 passed 00:12:09.036 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:09.036 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:12:09.036 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:12:09.036 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:09.036 Test: verify: REFTAG_INIT correct, REFTAG check ...[2024-07-11 16:27:45.633220] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:09.036 passed 00:12:09.036 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:12:09.036 Test: generate copy: DIF generated, GUARD check ...passed 00:12:09.036 Test: generate copy: DIF generated, APTTAG check ...[2024-07-11 16:27:45.633479] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:09.036 passed 00:12:09.036 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:09.036 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:09.036 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:09.036 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:09.036 Test: generate copy: iovecs-len validate ...passed 00:12:09.036 Test: generate copy: buffer alignment validate ...passed 00:12:09.036 00:12:09.036 Run Summary: Type Total Ran Passed Failed Inactive 00:12:09.036 suites 1 1 n/a 0 0 00:12:09.036 tests 20 20 20 0 0 00:12:09.036 
asserts 204 204 204 0 n/a 00:12:09.036 00:12:09.036 Elapsed time = 0.009 seconds 00:12:09.036 [2024-07-11 16:27:45.633942] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:12:09.970 00:12:09.970 real 0m1.728s 00:12:09.970 user 0m3.274s 00:12:09.970 sys 0m0.267s 00:12:09.970 16:27:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.970 ************************************ 00:12:09.970 END TEST accel_dif_functional_tests 00:12:09.970 ************************************ 00:12:09.970 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:12:09.970 00:12:09.970 real 1m44.425s 00:12:09.970 user 1m55.616s 00:12:09.970 sys 0m8.779s 00:12:09.970 ************************************ 00:12:09.970 END TEST accel 00:12:09.970 16:27:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.970 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:12:09.970 ************************************ 00:12:09.970 16:27:46 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:09.970 16:27:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:09.970 16:27:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:09.970 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:12:09.970 ************************************ 00:12:09.970 START TEST accel_rpc 00:12:09.970 ************************************ 00:12:09.970 16:27:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:09.970 * Looking for test storage... 00:12:10.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:10.229 16:27:46 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:10.229 16:27:46 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=110637 00:12:10.229 16:27:46 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:10.229 16:27:46 -- accel/accel_rpc.sh@15 -- # waitforlisten 110637 00:12:10.229 16:27:46 -- common/autotest_common.sh@819 -- # '[' -z 110637 ']' 00:12:10.229 16:27:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.229 16:27:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:10.229 16:27:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.229 16:27:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:10.229 16:27:46 -- common/autotest_common.sh@10 -- # set +x 00:12:10.229 [2024-07-11 16:27:46.858557] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:10.229 [2024-07-11 16:27:46.858944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110637 ] 00:12:10.229 [2024-07-11 16:27:47.027772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.487 [2024-07-11 16:27:47.206394] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:10.487 [2024-07-11 16:27:47.206938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.054 16:27:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:11.054 16:27:47 -- common/autotest_common.sh@852 -- # return 0 00:12:11.054 16:27:47 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:11.054 16:27:47 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:11.054 16:27:47 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:11.054 16:27:47 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:11.054 16:27:47 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:11.054 16:27:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:11.054 16:27:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:11.054 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:12:11.054 ************************************ 00:12:11.054 START TEST accel_assign_opcode 00:12:11.054 ************************************ 00:12:11.054 16:27:47 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:12:11.054 16:27:47 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:11.054 16:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:11.054 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:12:11.054 [2024-07-11 16:27:47.812606] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:11.054 16:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:11.054 16:27:47 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:11.054 16:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:11.054 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:12:11.054 [2024-07-11 16:27:47.820624] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:11.054 16:27:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:11.054 16:27:47 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:11.054 16:27:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:11.054 16:27:47 -- common/autotest_common.sh@10 -- # set +x 00:12:11.990 16:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:11.990 16:27:48 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:11.990 16:27:48 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:11.990 16:27:48 -- accel/accel_rpc.sh@42 -- # grep software 00:12:11.990 16:27:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:11.990 16:27:48 -- common/autotest_common.sh@10 -- # set +x 00:12:11.990 16:27:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:11.990 software 00:12:11.990 00:12:11.990 real 0m0.721s 00:12:11.990 user 0m0.067s 00:12:11.990 sys 0m0.000s 00:12:11.990 ************************************ 00:12:11.990 END TEST accel_assign_opcode 00:12:11.990 ************************************ 00:12:11.990 16:27:48 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.990 16:27:48 -- common/autotest_common.sh@10 -- # set +x 00:12:11.990 16:27:48 -- accel/accel_rpc.sh@55 -- # killprocess 110637 00:12:11.990 16:27:48 -- common/autotest_common.sh@926 -- # '[' -z 110637 ']' 00:12:11.990 16:27:48 -- common/autotest_common.sh@930 -- # kill -0 110637 00:12:11.990 16:27:48 -- common/autotest_common.sh@931 -- # uname 00:12:11.990 16:27:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:11.990 16:27:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110637 00:12:11.990 16:27:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:11.990 16:27:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:11.990 killing process with pid 110637 00:12:11.990 16:27:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110637' 00:12:11.990 16:27:48 -- common/autotest_common.sh@945 -- # kill 110637 00:12:11.990 16:27:48 -- common/autotest_common.sh@950 -- # wait 110637 00:12:13.898 ************************************ 00:12:13.898 END TEST accel_rpc 00:12:13.898 ************************************ 00:12:13.898 00:12:13.898 real 0m3.735s 00:12:13.898 user 0m3.806s 00:12:13.898 sys 0m0.444s 00:12:13.898 16:27:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:13.898 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:12:13.898 16:27:50 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:13.898 16:27:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:13.898 16:27:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:13.898 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:12:13.898 ************************************ 00:12:13.898 START TEST app_cmdline 00:12:13.898 ************************************ 00:12:13.898 16:27:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:13.898 * Looking for test storage... 00:12:13.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:13.898 16:27:50 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:13.898 16:27:50 -- app/cmdline.sh@17 -- # spdk_tgt_pid=110781 00:12:13.898 16:27:50 -- app/cmdline.sh@18 -- # waitforlisten 110781 00:12:13.898 16:27:50 -- common/autotest_common.sh@819 -- # '[' -z 110781 ']' 00:12:13.898 16:27:50 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:13.898 16:27:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.898 16:27:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:13.898 16:27:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.898 16:27:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:13.898 16:27:50 -- common/autotest_common.sh@10 -- # set +x 00:12:13.898 [2024-07-11 16:27:50.645854] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:13.898 [2024-07-11 16:27:50.646283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110781 ] 00:12:14.157 [2024-07-11 16:27:50.814615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.415 [2024-07-11 16:27:51.003199] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:14.415 [2024-07-11 16:27:51.003622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.792 16:27:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:15.792 16:27:52 -- common/autotest_common.sh@852 -- # return 0 00:12:15.792 16:27:52 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:15.792 { 00:12:15.792 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:12:15.792 "fields": { 00:12:15.792 "major": 24, 00:12:15.792 "minor": 1, 00:12:15.792 "patch": 1, 00:12:15.792 "suffix": "-pre", 00:12:15.792 "commit": "4b94202c6" 00:12:15.792 } 00:12:15.792 } 00:12:15.792 16:27:52 -- app/cmdline.sh@22 -- # expected_methods=() 00:12:15.792 16:27:52 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:15.792 16:27:52 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:15.792 16:27:52 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:15.792 16:27:52 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:15.792 16:27:52 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:15.792 16:27:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:15.792 16:27:52 -- app/cmdline.sh@26 -- # sort 00:12:15.792 16:27:52 -- common/autotest_common.sh@10 -- # set +x 00:12:15.792 16:27:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.051 16:27:52 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:16.051 16:27:52 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:16.051 16:27:52 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:16.051 16:27:52 -- common/autotest_common.sh@640 -- # local es=0 00:12:16.051 16:27:52 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:16.051 16:27:52 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.051 16:27:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:16.051 16:27:52 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.051 16:27:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:16.051 16:27:52 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.051 16:27:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:16.051 16:27:52 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:16.051 16:27:52 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:16.051 16:27:52 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:16.051 request: 00:12:16.051 { 00:12:16.051 "method": "env_dpdk_get_mem_stats", 00:12:16.051 "req_id": 1 00:12:16.051 } 00:12:16.051 Got 
JSON-RPC error response 00:12:16.051 response: 00:12:16.051 { 00:12:16.051 "code": -32601, 00:12:16.051 "message": "Method not found" 00:12:16.051 } 00:12:16.051 16:27:52 -- common/autotest_common.sh@643 -- # es=1 00:12:16.051 16:27:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:16.051 16:27:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:16.051 16:27:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:16.051 16:27:52 -- app/cmdline.sh@1 -- # killprocess 110781 00:12:16.051 16:27:52 -- common/autotest_common.sh@926 -- # '[' -z 110781 ']' 00:12:16.051 16:27:52 -- common/autotest_common.sh@930 -- # kill -0 110781 00:12:16.051 16:27:52 -- common/autotest_common.sh@931 -- # uname 00:12:16.051 16:27:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:16.051 16:27:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110781 00:12:16.051 16:27:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:16.051 16:27:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:16.051 killing process with pid 110781 00:12:16.051 16:27:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110781' 00:12:16.051 16:27:52 -- common/autotest_common.sh@945 -- # kill 110781 00:12:16.051 16:27:52 -- common/autotest_common.sh@950 -- # wait 110781 00:12:18.582 ************************************ 00:12:18.582 END TEST app_cmdline 00:12:18.582 ************************************ 00:12:18.582 00:12:18.582 real 0m4.274s 00:12:18.582 user 0m4.870s 00:12:18.582 sys 0m0.536s 00:12:18.582 16:27:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.582 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:12:18.582 16:27:54 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:18.582 16:27:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:18.582 16:27:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:18.582 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:12:18.582 ************************************ 00:12:18.582 START TEST version 00:12:18.582 ************************************ 00:12:18.582 16:27:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:18.582 * Looking for test storage... 
00:12:18.582 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:18.582 16:27:54 -- app/version.sh@17 -- # get_header_version major 00:12:18.582 16:27:54 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:18.582 16:27:54 -- app/version.sh@14 -- # cut -f2 00:12:18.582 16:27:54 -- app/version.sh@14 -- # tr -d '"' 00:12:18.582 16:27:54 -- app/version.sh@17 -- # major=24 00:12:18.582 16:27:54 -- app/version.sh@18 -- # get_header_version minor 00:12:18.582 16:27:54 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:18.582 16:27:54 -- app/version.sh@14 -- # cut -f2 00:12:18.582 16:27:54 -- app/version.sh@14 -- # tr -d '"' 00:12:18.582 16:27:54 -- app/version.sh@18 -- # minor=1 00:12:18.582 16:27:54 -- app/version.sh@19 -- # get_header_version patch 00:12:18.582 16:27:54 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:18.582 16:27:54 -- app/version.sh@14 -- # tr -d '"' 00:12:18.582 16:27:54 -- app/version.sh@14 -- # cut -f2 00:12:18.582 16:27:54 -- app/version.sh@19 -- # patch=1 00:12:18.582 16:27:54 -- app/version.sh@20 -- # get_header_version suffix 00:12:18.582 16:27:54 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:18.582 16:27:54 -- app/version.sh@14 -- # cut -f2 00:12:18.582 16:27:54 -- app/version.sh@14 -- # tr -d '"' 00:12:18.582 16:27:54 -- app/version.sh@20 -- # suffix=-pre 00:12:18.582 16:27:54 -- app/version.sh@22 -- # version=24.1 00:12:18.582 16:27:54 -- app/version.sh@25 -- # (( patch != 0 )) 00:12:18.582 16:27:54 -- app/version.sh@25 -- # version=24.1.1 00:12:18.582 16:27:54 -- app/version.sh@28 -- # version=24.1.1rc0 00:12:18.582 16:27:54 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:18.582 16:27:54 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:18.582 16:27:54 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:12:18.582 16:27:54 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:12:18.582 00:12:18.582 real 0m0.140s 00:12:18.582 user 0m0.104s 00:12:18.582 sys 0m0.068s 00:12:18.582 ************************************ 00:12:18.582 END TEST version 00:12:18.582 ************************************ 00:12:18.582 16:27:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.582 16:27:54 -- common/autotest_common.sh@10 -- # set +x 00:12:18.582 16:27:55 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:12:18.583 16:27:55 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:18.583 16:27:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:18.583 16:27:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:18.583 16:27:55 -- common/autotest_common.sh@10 -- # set +x 00:12:18.583 ************************************ 00:12:18.583 START TEST blockdev_general 00:12:18.583 ************************************ 00:12:18.583 16:27:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:18.583 * Looking for test storage... 
00:12:18.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:18.583 16:27:55 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:18.583 16:27:55 -- bdev/nbd_common.sh@6 -- # set -e 00:12:18.583 16:27:55 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:18.583 16:27:55 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:18.583 16:27:55 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:18.583 16:27:55 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:18.583 16:27:55 -- bdev/blockdev.sh@18 -- # : 00:12:18.583 16:27:55 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:18.583 16:27:55 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:18.583 16:27:55 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:18.583 16:27:55 -- bdev/blockdev.sh@672 -- # uname -s 00:12:18.583 16:27:55 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:18.583 16:27:55 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:18.583 16:27:55 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:12:18.583 16:27:55 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:18.583 16:27:55 -- bdev/blockdev.sh@682 -- # dek= 00:12:18.583 16:27:55 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:18.583 16:27:55 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:18.583 16:27:55 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:18.583 16:27:55 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:12:18.583 16:27:55 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:12:18.583 16:27:55 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:18.583 16:27:55 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=110961 00:12:18.583 16:27:55 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:18.583 16:27:55 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:18.583 16:27:55 -- bdev/blockdev.sh@47 -- # waitforlisten 110961 00:12:18.583 16:27:55 -- common/autotest_common.sh@819 -- # '[' -z 110961 ']' 00:12:18.583 16:27:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.583 16:27:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:18.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.583 16:27:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.583 16:27:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:18.583 16:27:55 -- common/autotest_common.sh@10 -- # set +x 00:12:18.583 [2024-07-11 16:27:55.172297] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:18.583 [2024-07-11 16:27:55.172728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110961 ] 00:12:18.583 [2024-07-11 16:27:55.339359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.841 [2024-07-11 16:27:55.536870] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:18.841 [2024-07-11 16:27:55.537256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.408 16:27:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:19.408 16:27:56 -- common/autotest_common.sh@852 -- # return 0 00:12:19.408 16:27:56 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:19.408 16:27:56 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:12:19.408 16:27:56 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:12:19.408 16:27:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.408 16:27:56 -- common/autotest_common.sh@10 -- # set +x 00:12:20.344 [2024-07-11 16:27:56.844192] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:20.344 [2024-07-11 16:27:56.844478] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:20.344 00:12:20.344 [2024-07-11 16:27:56.852166] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:20.344 [2024-07-11 16:27:56.852364] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:20.344 00:12:20.344 Malloc0 00:12:20.344 Malloc1 00:12:20.344 Malloc2 00:12:20.344 Malloc3 00:12:20.344 Malloc4 00:12:20.344 Malloc5 00:12:20.344 Malloc6 00:12:20.344 Malloc7 00:12:20.604 Malloc8 00:12:20.604 Malloc9 00:12:20.604 [2024-07-11 16:27:57.222173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:20.604 [2024-07-11 16:27:57.222404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.604 [2024-07-11 16:27:57.222473] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:20.604 [2024-07-11 16:27:57.222719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.604 [2024-07-11 16:27:57.224987] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.604 [2024-07-11 16:27:57.225207] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:20.604 TestPT 00:12:20.604 16:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.604 16:27:57 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:20.604 5000+0 records in 00:12:20.604 5000+0 records out 00:12:20.604 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0281833 s, 363 MB/s 00:12:20.604 16:27:57 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:20.604 16:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.604 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.604 AIO0 00:12:20.604 16:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.604 16:27:57 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:20.604 16:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.604 16:27:57 -- common/autotest_common.sh@10 -- # set +x 
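The "5000+0 records in/out" lines above come from seeding the 10 MB zero-filled backing file that the suite then registers as bdev AIO0 with a 2048-byte block size. A minimal sketch of the same setup done by hand, assuming a spdk_tgt already listening on the default /var/tmp/spdk.sock (the suite itself uses its in-process rpc_cmd helper rather than scripts/rpc.py):

    # seed the AIO backing file: 5000 blocks of 2048 bytes = 10 MB
    aiofile=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile
    dd if=/dev/zero of="$aiofile" bs=2048 count=5000
    # register it as an AIO bdev named AIO0 with a 2048-byte block size
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create "$aiofile" AIO0 2048
    # block until all bdev examine callbacks have completed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine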
00:12:20.604 16:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.604 16:27:57 -- bdev/blockdev.sh@738 -- # cat 00:12:20.604 16:27:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:20.604 16:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.604 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.604 16:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.604 16:27:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:20.604 16:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.604 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.604 16:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.604 16:27:57 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:20.604 16:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.604 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.604 16:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.604 16:27:57 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:20.604 16:27:57 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:20.604 16:27:57 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:20.604 16:27:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.604 16:27:57 -- common/autotest_common.sh@10 -- # set +x 00:12:20.864 16:27:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.864 16:27:57 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:20.864 16:27:57 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:20.865 16:27:57 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3b5fd810-8020-4360-87d6-26f55343a79b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3b5fd810-8020-4360-87d6-26f55343a79b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "7dbc8af3-0430-5e80-9afd-ba058e41d14b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "7dbc8af3-0430-5e80-9afd-ba058e41d14b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2d2c27df-2aa8-53bf-af4b-cfbc2a09d477"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2d2c27df-2aa8-53bf-af4b-cfbc2a09d477",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "742fcb8e-21d8-5342-a0f0-fb9db9d8bbff"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "742fcb8e-21d8-5342-a0f0-fb9db9d8bbff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "e26058ad-004d-5e5e-8a9e-e4b9975a79f0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e26058ad-004d-5e5e-8a9e-e4b9975a79f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "6e74d207-aef4-5ee4-9a87-397a72c88c58"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6e74d207-aef4-5ee4-9a87-397a72c88c58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "93f00d40-9d63-5451-9c17-6662291cdaf8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "93f00d40-9d63-5451-9c17-6662291cdaf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "cf3147ae-fc78-5542-96f3-949c8ae504fe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cf3147ae-fc78-5542-96f3-949c8ae504fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "e8758ca2-1ea1-5e97-8297-cdb49b1a9859"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e8758ca2-1ea1-5e97-8297-cdb49b1a9859",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "2eaebf22-a080-5a78-8793-6cc3aa51b58e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2eaebf22-a080-5a78-8793-6cc3aa51b58e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "e8736bcd-cb04-52ae-9c79-a6317812a96d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e8736bcd-cb04-52ae-9c79-a6317812a96d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "7d2a97ce-79fe-51d1-9296-10829d770d23"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "7d2a97ce-79fe-51d1-9296-10829d770d23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d40ffbd1-ee83-40f4-9b6d-db253f5deec1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d40ffbd1-ee83-40f4-9b6d-db253f5deec1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d40ffbd1-ee83-40f4-9b6d-db253f5deec1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "dd8c9277-a081-4ea4-a309-eca950975e48",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "04ea45e0-5274-4ada-a0e3-1a39ddae58ec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "9a63579c-7158-4366-a70d-e13ec90f74db"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9a63579c-7158-4366-a70d-e13ec90f74db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9a63579c-7158-4366-a70d-e13ec90f74db",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "6e8b82bd-3b33-4a58-b4e6-7b9023d47713",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "54c000da-a792-4655-b49b-c9a16e5a23d1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b89f1ad2-aa19-407e-8270-f89ca89f00ca"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b89f1ad2-aa19-407e-8270-f89ca89f00ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b89f1ad2-aa19-407e-8270-f89ca89f00ca",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "e88acdfe-5e9a-4c33-87d2-d3dc8319cfbd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "4e7e1fff-8fb4-4ffd-9b68-3f2f24154dce",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "e8c732c5-2bc0-4d88-ae25-d4b7de9ae5ad"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "e8c732c5-2bc0-4d88-ae25-d4b7de9ae5ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:20.865 16:27:57 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:20.865 16:27:57 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:12:20.865 16:27:57 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:20.865 16:27:57 -- bdev/blockdev.sh@752 -- # killprocess 110961 00:12:20.865 16:27:57 -- common/autotest_common.sh@926 -- # '[' -z 110961 ']' 00:12:20.865 16:27:57 -- common/autotest_common.sh@930 -- # kill -0 110961 00:12:20.865 16:27:57 -- common/autotest_common.sh@931 -- # uname 00:12:20.865 16:27:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:20.865 16:27:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 110961 00:12:20.865 16:27:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:20.865 16:27:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:20.865 16:27:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 110961' 00:12:20.865 killing process with pid 110961 00:12:20.865 16:27:57 -- common/autotest_common.sh@945 -- # kill 110961 00:12:20.865 16:27:57 -- common/autotest_common.sh@950 -- # wait 110961 00:12:24.151 16:28:00 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:24.151 16:28:00 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:24.151 16:28:00 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:12:24.151 16:28:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:24.151 16:28:00 -- common/autotest_common.sh@10 -- # set +x 00:12:24.151 ************************************ 00:12:24.151 START TEST bdev_hello_world 00:12:24.151 ************************************ 00:12:24.151 16:28:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:24.151 [2024-07-11 16:28:00.457667] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:24.151 [2024-07-11 16:28:00.458117] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111074 ] 00:12:24.151 [2024-07-11 16:28:00.624320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.151 [2024-07-11 16:28:00.794381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.410 [2024-07-11 16:28:01.137804] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:24.410 [2024-07-11 16:28:01.138048] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:24.410 [2024-07-11 16:28:01.145744] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:24.410 [2024-07-11 16:28:01.145952] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:24.410 [2024-07-11 16:28:01.153764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:24.410 [2024-07-11 16:28:01.153939] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:24.410 [2024-07-11 16:28:01.154084] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:24.669 [2024-07-11 16:28:01.341757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:24.669 [2024-07-11 16:28:01.342093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.669 [2024-07-11 16:28:01.342176] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:24.669 [2024-07-11 16:28:01.342409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.669 [2024-07-11 16:28:01.344668] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.669 [2024-07-11 16:28:01.344871] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:24.927 [2024-07-11 16:28:01.629712] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:24.927 [2024-07-11 16:28:01.630034] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:24.927 [2024-07-11 16:28:01.630134] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:24.927 [2024-07-11 16:28:01.630313] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:24.927 [2024-07-11 16:28:01.630484] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:24.927 [2024-07-11 16:28:01.630608] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:24.927 [2024-07-11 16:28:01.630694] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
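The hello_bdev NOTICE lines above trace one full round trip: open Malloc0, get an I/O channel, write the string, read it back, and compare. The example can be re-run outside the harness with the same command run_test invoked; a sketch, assuming the vagrant repo layout used in this run:

    cd /home/vagrant/spdk_repo/spdk
    # --json loads the bdev configuration; -b selects the bdev to write and read through
    build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0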
00:12:24.927 00:12:24.928 [2024-07-11 16:28:01.630865] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:26.830 00:12:26.830 real 0m3.067s 00:12:26.830 user 0m2.577s 00:12:26.830 sys 0m0.329s 00:12:26.830 16:28:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.830 ************************************ 00:12:26.830 END TEST bdev_hello_world 00:12:26.830 ************************************ 00:12:26.830 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:12:26.830 16:28:03 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:26.830 16:28:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:26.830 16:28:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:26.830 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:12:26.830 ************************************ 00:12:26.830 START TEST bdev_bounds 00:12:26.830 ************************************ 00:12:26.830 Process bdevio pid: 111136 00:12:26.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.830 16:28:03 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:12:26.830 16:28:03 -- bdev/blockdev.sh@288 -- # bdevio_pid=111136 00:12:26.830 16:28:03 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:26.830 16:28:03 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 111136' 00:12:26.830 16:28:03 -- bdev/blockdev.sh@291 -- # waitforlisten 111136 00:12:26.830 16:28:03 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:26.830 16:28:03 -- common/autotest_common.sh@819 -- # '[' -z 111136 ']' 00:12:26.830 16:28:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.830 16:28:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:26.830 16:28:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.830 16:28:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:26.830 16:28:03 -- common/autotest_common.sh@10 -- # set +x 00:12:26.830 [2024-07-11 16:28:03.572412] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:26.830 [2024-07-11 16:28:03.572855] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111136 ] 00:12:27.087 [2024-07-11 16:28:03.748191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:27.344 [2024-07-11 16:28:03.917271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.344 [2024-07-11 16:28:03.917460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.344 [2024-07-11 16:28:03.917456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.609 [2024-07-11 16:28:04.327491] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:27.609 [2024-07-11 16:28:04.327630] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:27.609 [2024-07-11 16:28:04.335456] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:27.609 [2024-07-11 16:28:04.335555] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:27.609 [2024-07-11 16:28:04.343518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:27.609 [2024-07-11 16:28:04.343627] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:27.609 [2024-07-11 16:28:04.343667] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:27.871 [2024-07-11 16:28:04.537435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:27.871 [2024-07-11 16:28:04.537603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.871 [2024-07-11 16:28:04.537681] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:27.871 [2024-07-11 16:28:04.537703] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.871 [2024-07-11 16:28:04.540642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.871 [2024-07-11 16:28:04.540705] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:28.435 16:28:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:28.435 16:28:05 -- common/autotest_common.sh@852 -- # return 0 00:12:28.435 16:28:05 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:28.693 I/O targets: 00:12:28.693 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:28.693 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:28.693 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:28.693 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:28.693 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:28.693 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:28.693 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:28.693 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:28.693 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:28.693 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:28.693 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:28.693 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:28.693 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:28.693 concat0: 131072 blocks of 512 bytes (64 MiB) 00:12:28.693 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:28.693 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
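Each "Suite: bdevio tests on: ..." block below is driven over RPC rather than hard-wired into the binary: bdevio starts in wait mode against the JSON bdev config, and tests.py triggers the runs once the app is listening. A sketch of that two-step flow, assuming the same repo layout (-s 0 is the PRE_RESERVED_MEM value set earlier in blockdev.sh):

    cd /home/vagrant/spdk_repo/spdk
    # start the bdevio app; -w makes it wait for an RPC before running tests
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # once /var/tmp/spdk.sock accepts connections, run every registered test
    test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"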
00:12:28.693 00:12:28.693 00:12:28.693 CUnit - A unit testing framework for C - Version 2.1-3 00:12:28.693 http://cunit.sourceforge.net/ 00:12:28.693 00:12:28.693 00:12:28.693 Suite: bdevio tests on: AIO0 00:12:28.693 Test: blockdev write read block ...passed 00:12:28.693 Test: blockdev write zeroes read block ...passed 00:12:28.693 Test: blockdev write zeroes read no split ...passed 00:12:28.693 Test: blockdev write zeroes read split ...passed 00:12:28.693 Test: blockdev write zeroes read split partial ...passed 00:12:28.693 Test: blockdev reset ...passed 00:12:28.693 Test: blockdev write read 8 blocks ...passed 00:12:28.693 Test: blockdev write read size > 128k ...passed 00:12:28.693 Test: blockdev write read invalid size ...passed 00:12:28.693 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:28.693 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:28.693 Test: blockdev write read max offset ...passed 00:12:28.693 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:28.693 Test: blockdev writev readv 8 blocks ...passed 00:12:28.693 Test: blockdev writev readv 30 x 1block ...passed 00:12:28.693 Test: blockdev writev readv block ...passed 00:12:28.693 Test: blockdev writev readv size > 128k ...passed 00:12:28.693 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:28.693 Test: blockdev comparev and writev ...passed 00:12:28.693 Test: blockdev nvme passthru rw ...passed 00:12:28.693 Test: blockdev nvme passthru vendor specific ...passed 00:12:28.693 Test: blockdev nvme admin passthru ...passed 00:12:28.693 Test: blockdev copy ...passed 00:12:28.693 Suite: bdevio tests on: raid1 00:12:28.693 Test: blockdev write read block ...passed 00:12:28.693 Test: blockdev write zeroes read block ...passed 00:12:28.693 Test: blockdev write zeroes read no split ...passed 00:12:28.693 Test: blockdev write zeroes read split ...passed 00:12:28.693 Test: blockdev write zeroes read split partial ...passed 00:12:28.693 Test: blockdev reset ...passed 00:12:28.693 Test: blockdev write read 8 blocks ...passed 00:12:28.693 Test: blockdev write read size > 128k ...passed 00:12:28.693 Test: blockdev write read invalid size ...passed 00:12:28.693 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:28.693 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:28.693 Test: blockdev write read max offset ...passed 00:12:28.693 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:28.693 Test: blockdev writev readv 8 blocks ...passed 00:12:28.693 Test: blockdev writev readv 30 x 1block ...passed 00:12:28.693 Test: blockdev writev readv block ...passed 00:12:28.693 Test: blockdev writev readv size > 128k ...passed 00:12:28.693 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:28.693 Test: blockdev comparev and writev ...passed 00:12:28.693 Test: blockdev nvme passthru rw ...passed 00:12:28.693 Test: blockdev nvme passthru vendor specific ...passed 00:12:28.693 Test: blockdev nvme admin passthru ...passed 00:12:28.693 Test: blockdev copy ...passed 00:12:28.693 Suite: bdevio tests on: concat0 00:12:28.693 Test: blockdev write read block ...passed 00:12:28.693 Test: blockdev write zeroes read block ...passed 00:12:28.693 Test: blockdev write zeroes read no split ...passed 00:12:28.693 Test: blockdev write zeroes read split ...passed 00:12:28.693 Test: blockdev write zeroes read split partial ...passed 00:12:28.693 Test: blockdev reset 
...passed 00:12:28.693 Test: blockdev write read 8 blocks ...passed 00:12:28.693 Test: blockdev write read size > 128k ...passed 00:12:28.693 Test: blockdev write read invalid size ...passed 00:12:28.693 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:28.693 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:28.693 Test: blockdev write read max offset ...passed 00:12:28.693 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:28.693 Test: blockdev writev readv 8 blocks ...passed 00:12:28.693 Test: blockdev writev readv 30 x 1block ...passed 00:12:28.693 Test: blockdev writev readv block ...passed 00:12:28.693 Test: blockdev writev readv size > 128k ...passed 00:12:28.693 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:28.693 Test: blockdev comparev and writev ...passed 00:12:28.693 Test: blockdev nvme passthru rw ...passed 00:12:28.693 Test: blockdev nvme passthru vendor specific ...passed 00:12:28.693 Test: blockdev nvme admin passthru ...passed 00:12:28.693 Test: blockdev copy ...passed 00:12:28.693 Suite: bdevio tests on: raid0 00:12:28.693 Test: blockdev write read block ...passed 00:12:28.693 Test: blockdev write zeroes read block ...passed 00:12:28.693 Test: blockdev write zeroes read no split ...passed 00:12:28.693 Test: blockdev write zeroes read split ...passed 00:12:28.693 Test: blockdev write zeroes read split partial ...passed 00:12:28.693 Test: blockdev reset ...passed 00:12:28.693 Test: blockdev write read 8 blocks ...passed 00:12:28.693 Test: blockdev write read size > 128k ...passed 00:12:28.693 Test: blockdev write read invalid size ...passed 00:12:28.693 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:28.693 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:28.693 Test: blockdev write read max offset ...passed 00:12:28.693 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:28.693 Test: blockdev writev readv 8 blocks ...passed 00:12:28.693 Test: blockdev writev readv 30 x 1block ...passed 00:12:28.951 Test: blockdev writev readv block ...passed 00:12:28.951 Test: blockdev writev readv size > 128k ...passed 00:12:28.951 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:28.951 Test: blockdev comparev and writev ...passed 00:12:28.951 Test: blockdev nvme passthru rw ...passed 00:12:28.951 Test: blockdev nvme passthru vendor specific ...passed 00:12:28.951 Test: blockdev nvme admin passthru ...passed 00:12:28.951 Test: blockdev copy ...passed 00:12:28.951 Suite: bdevio tests on: TestPT 00:12:28.951 Test: blockdev write read block ...passed 00:12:28.951 Test: blockdev write zeroes read block ...passed 00:12:28.952 Test: blockdev write zeroes read no split ...passed 00:12:28.952 Test: blockdev write zeroes read split ...passed 00:12:28.952 Test: blockdev write zeroes read split partial ...passed 00:12:28.952 Test: blockdev reset ...passed 00:12:28.952 Test: blockdev write read 8 blocks ...passed 00:12:28.952 Test: blockdev write read size > 128k ...passed 00:12:28.952 Test: blockdev write read invalid size ...passed 00:12:28.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:28.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:28.952 Test: blockdev write read max offset ...passed 00:12:28.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:28.952 Test: blockdev writev readv 8 blocks 
...passed 00:12:28.952 Test: blockdev writev readv 30 x 1block ...passed 00:12:28.952 Test: blockdev writev readv block ...passed 00:12:28.952 Test: blockdev writev readv size > 128k ...passed 00:12:28.952 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:28.952 Test: blockdev comparev and writev ...passed 00:12:28.952 Test: blockdev nvme passthru rw ...passed 00:12:28.952 Test: blockdev nvme passthru vendor specific ...passed 00:12:28.952 Test: blockdev nvme admin passthru ...passed 00:12:28.952 Test: blockdev copy ...passed 00:12:28.952 Suite: bdevio tests on: Malloc2p7 00:12:28.952 Test: blockdev write read block ...passed 00:12:28.952 Test: blockdev write zeroes read block ...passed 00:12:28.952 Test: blockdev write zeroes read no split ...passed 00:12:28.952 Test: blockdev write zeroes read split ...passed 00:12:28.952 Test: blockdev write zeroes read split partial ...passed 00:12:28.952 Test: blockdev reset ...passed 00:12:28.952 Test: blockdev write read 8 blocks ...passed 00:12:28.952 Test: blockdev write read size > 128k ...passed 00:12:28.952 Test: blockdev write read invalid size ...passed 00:12:28.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:28.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:28.952 Test: blockdev write read max offset ...passed 00:12:28.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:28.952 Test: blockdev writev readv 8 blocks ...passed 00:12:28.952 Test: blockdev writev readv 30 x 1block ...passed 00:12:28.952 Test: blockdev writev readv block ...passed 00:12:28.952 Test: blockdev writev readv size > 128k ...passed 00:12:28.952 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:28.952 Test: blockdev comparev and writev ...passed 00:12:28.952 Test: blockdev nvme passthru rw ...passed 00:12:28.952 Test: blockdev nvme passthru vendor specific ...passed 00:12:28.952 Test: blockdev nvme admin passthru ...passed 00:12:28.952 Test: blockdev copy ...passed 00:12:28.952 Suite: bdevio tests on: Malloc2p6 00:12:28.952 Test: blockdev write read block ...passed 00:12:28.952 Test: blockdev write zeroes read block ...passed 00:12:28.952 Test: blockdev write zeroes read no split ...passed 00:12:28.952 Test: blockdev write zeroes read split ...passed 00:12:28.952 Test: blockdev write zeroes read split partial ...passed 00:12:28.952 Test: blockdev reset ...passed 00:12:28.952 Test: blockdev write read 8 blocks ...passed 00:12:28.952 Test: blockdev write read size > 128k ...passed 00:12:28.952 Test: blockdev write read invalid size ...passed 00:12:28.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:28.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:28.952 Test: blockdev write read max offset ...passed 00:12:28.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:28.952 Test: blockdev writev readv 8 blocks ...passed 00:12:28.952 Test: blockdev writev readv 30 x 1block ...passed 00:12:28.952 Test: blockdev writev readv block ...passed 00:12:28.952 Test: blockdev writev readv size > 128k ...passed 00:12:28.952 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:28.952 Test: blockdev comparev and writev ...passed 00:12:28.952 Test: blockdev nvme passthru rw ...passed 00:12:28.952 Test: blockdev nvme passthru vendor specific ...passed 00:12:28.952 Test: blockdev nvme admin passthru ...passed 00:12:28.952 Test: blockdev copy ...passed 
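The Malloc1p*/Malloc2p* targets exercised by these suites are split vbdevs: the I/O targets table shows Malloc1 carved into two 32768-block halves and Malloc2 into eight 8192-block slices. This run loads them from bdev.json, but the equivalent layout can be sketched with rpc.py (bdev_split_create is SPDK's RPC for this; shown only to decode the naming scheme):

    # Malloc1 (65536 blocks) -> Malloc1p0, Malloc1p1 (32768 blocks each)
    scripts/rpc.py bdev_split_create Malloc1 2
    # Malloc2 (65536 blocks) -> Malloc2p0 .. Malloc2p7 (8192 blocks each)
    scripts/rpc.py bdev_split_create Malloc2 8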
00:12:28.952 Suite: bdevio tests on: Malloc2p5 00:12:28.952 Test: blockdev write read block ...passed 00:12:28.952 Test: blockdev write zeroes read block ...passed 00:12:28.952 Test: blockdev write zeroes read no split ...passed 00:12:28.952 Test: blockdev write zeroes read split ...passed 00:12:28.952 Test: blockdev write zeroes read split partial ...passed 00:12:28.952 Test: blockdev reset ...passed 00:12:28.952 Test: blockdev write read 8 blocks ...passed 00:12:28.952 Test: blockdev write read size > 128k ...passed 00:12:28.952 Test: blockdev write read invalid size ...passed 00:12:28.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:28.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:28.952 Test: blockdev write read max offset ...passed 00:12:28.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:28.952 Test: blockdev writev readv 8 blocks ...passed 00:12:28.952 Test: blockdev writev readv 30 x 1block ...passed 00:12:28.952 Test: blockdev writev readv block ...passed 00:12:28.952 Test: blockdev writev readv size > 128k ...passed 00:12:28.952 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:28.952 Test: blockdev comparev and writev ...passed 00:12:28.952 Test: blockdev nvme passthru rw ...passed 00:12:28.952 Test: blockdev nvme passthru vendor specific ...passed 00:12:28.952 Test: blockdev nvme admin passthru ...passed 00:12:28.952 Test: blockdev copy ...passed 00:12:28.952 Suite: bdevio tests on: Malloc2p4 00:12:28.952 Test: blockdev write read block ...passed 00:12:28.952 Test: blockdev write zeroes read block ...passed 00:12:28.952 Test: blockdev write zeroes read no split ...passed 00:12:28.952 Test: blockdev write zeroes read split ...passed 00:12:29.211 Test: blockdev write zeroes read split partial ...passed 00:12:29.211 Test: blockdev reset ...passed 00:12:29.211 Test: blockdev write read 8 blocks ...passed 00:12:29.211 Test: blockdev write read size > 128k ...passed 00:12:29.211 Test: blockdev write read invalid size ...passed 00:12:29.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.211 Test: blockdev write read max offset ...passed 00:12:29.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.211 Test: blockdev writev readv 8 blocks ...passed 00:12:29.211 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.211 Test: blockdev writev readv block ...passed 00:12:29.211 Test: blockdev writev readv size > 128k ...passed 00:12:29.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.211 Test: blockdev comparev and writev ...passed 00:12:29.211 Test: blockdev nvme passthru rw ...passed 00:12:29.211 Test: blockdev nvme passthru vendor specific ...passed 00:12:29.211 Test: blockdev nvme admin passthru ...passed 00:12:29.211 Test: blockdev copy ...passed 00:12:29.211 Suite: bdevio tests on: Malloc2p3 00:12:29.211 Test: blockdev write read block ...passed 00:12:29.211 Test: blockdev write zeroes read block ...passed 00:12:29.211 Test: blockdev write zeroes read no split ...passed 00:12:29.211 Test: blockdev write zeroes read split ...passed 00:12:29.211 Test: blockdev write zeroes read split partial ...passed 00:12:29.211 Test: blockdev reset ...passed 00:12:29.211 Test: blockdev write read 8 blocks ...passed 00:12:29.211 Test: blockdev write read size > 128k ...passed 00:12:29.211 Test: 
blockdev write read invalid size ...passed 00:12:29.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.211 Test: blockdev write read max offset ...passed 00:12:29.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.211 Test: blockdev writev readv 8 blocks ...passed 00:12:29.211 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.211 Test: blockdev writev readv block ...passed 00:12:29.211 Test: blockdev writev readv size > 128k ...passed 00:12:29.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.211 Test: blockdev comparev and writev ...passed 00:12:29.211 Test: blockdev nvme passthru rw ...passed 00:12:29.211 Test: blockdev nvme passthru vendor specific ...passed 00:12:29.211 Test: blockdev nvme admin passthru ...passed 00:12:29.211 Test: blockdev copy ...passed 00:12:29.211 Suite: bdevio tests on: Malloc2p2 00:12:29.211 Test: blockdev write read block ...passed 00:12:29.211 Test: blockdev write zeroes read block ...passed 00:12:29.211 Test: blockdev write zeroes read no split ...passed 00:12:29.211 Test: blockdev write zeroes read split ...passed 00:12:29.211 Test: blockdev write zeroes read split partial ...passed 00:12:29.211 Test: blockdev reset ...passed 00:12:29.211 Test: blockdev write read 8 blocks ...passed 00:12:29.211 Test: blockdev write read size > 128k ...passed 00:12:29.211 Test: blockdev write read invalid size ...passed 00:12:29.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.211 Test: blockdev write read max offset ...passed 00:12:29.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.211 Test: blockdev writev readv 8 blocks ...passed 00:12:29.211 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.211 Test: blockdev writev readv block ...passed 00:12:29.211 Test: blockdev writev readv size > 128k ...passed 00:12:29.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.211 Test: blockdev comparev and writev ...passed 00:12:29.211 Test: blockdev nvme passthru rw ...passed 00:12:29.211 Test: blockdev nvme passthru vendor specific ...passed 00:12:29.211 Test: blockdev nvme admin passthru ...passed 00:12:29.211 Test: blockdev copy ...passed 00:12:29.211 Suite: bdevio tests on: Malloc2p1 00:12:29.211 Test: blockdev write read block ...passed 00:12:29.211 Test: blockdev write zeroes read block ...passed 00:12:29.211 Test: blockdev write zeroes read no split ...passed 00:12:29.211 Test: blockdev write zeroes read split ...passed 00:12:29.211 Test: blockdev write zeroes read split partial ...passed 00:12:29.211 Test: blockdev reset ...passed 00:12:29.211 Test: blockdev write read 8 blocks ...passed 00:12:29.211 Test: blockdev write read size > 128k ...passed 00:12:29.211 Test: blockdev write read invalid size ...passed 00:12:29.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.211 Test: blockdev write read max offset ...passed 00:12:29.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.211 Test: blockdev writev readv 8 blocks ...passed 00:12:29.211 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.211 Test: blockdev writev readv block ...passed 
00:12:29.211 Test: blockdev writev readv size > 128k ...passed 00:12:29.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.211 Test: blockdev comparev and writev ...passed 00:12:29.211 Test: blockdev nvme passthru rw ...passed 00:12:29.211 Test: blockdev nvme passthru vendor specific ...passed 00:12:29.211 Test: blockdev nvme admin passthru ...passed 00:12:29.211 Test: blockdev copy ...passed 00:12:29.211 Suite: bdevio tests on: Malloc2p0 00:12:29.211 Test: blockdev write read block ...passed 00:12:29.211 Test: blockdev write zeroes read block ...passed 00:12:29.211 Test: blockdev write zeroes read no split ...passed 00:12:29.211 Test: blockdev write zeroes read split ...passed 00:12:29.211 Test: blockdev write zeroes read split partial ...passed 00:12:29.211 Test: blockdev reset ...passed 00:12:29.211 Test: blockdev write read 8 blocks ...passed 00:12:29.211 Test: blockdev write read size > 128k ...passed 00:12:29.211 Test: blockdev write read invalid size ...passed 00:12:29.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.211 Test: blockdev write read max offset ...passed 00:12:29.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.211 Test: blockdev writev readv 8 blocks ...passed 00:12:29.211 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.211 Test: blockdev writev readv block ...passed 00:12:29.211 Test: blockdev writev readv size > 128k ...passed 00:12:29.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.211 Test: blockdev comparev and writev ...passed 00:12:29.211 Test: blockdev nvme passthru rw ...passed 00:12:29.211 Test: blockdev nvme passthru vendor specific ...passed 00:12:29.211 Test: blockdev nvme admin passthru ...passed 00:12:29.211 Test: blockdev copy ...passed 00:12:29.211 Suite: bdevio tests on: Malloc1p1 00:12:29.211 Test: blockdev write read block ...passed 00:12:29.211 Test: blockdev write zeroes read block ...passed 00:12:29.211 Test: blockdev write zeroes read no split ...passed 00:12:29.211 Test: blockdev write zeroes read split ...passed 00:12:29.211 Test: blockdev write zeroes read split partial ...passed 00:12:29.211 Test: blockdev reset ...passed 00:12:29.211 Test: blockdev write read 8 blocks ...passed 00:12:29.211 Test: blockdev write read size > 128k ...passed 00:12:29.211 Test: blockdev write read invalid size ...passed 00:12:29.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.211 Test: blockdev write read max offset ...passed 00:12:29.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.211 Test: blockdev writev readv 8 blocks ...passed 00:12:29.211 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.211 Test: blockdev writev readv block ...passed 00:12:29.211 Test: blockdev writev readv size > 128k ...passed 00:12:29.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.211 Test: blockdev comparev and writev ...passed 00:12:29.211 Test: blockdev nvme passthru rw ...passed 00:12:29.211 Test: blockdev nvme passthru vendor specific ...passed 00:12:29.211 Test: blockdev nvme admin passthru ...passed 00:12:29.211 Test: blockdev copy ...passed 00:12:29.211 Suite: bdevio tests on: Malloc1p0 00:12:29.211 Test: blockdev write read block ...passed 00:12:29.211 Test: blockdev 
write zeroes read block ...passed 00:12:29.211 Test: blockdev write zeroes read no split ...passed 00:12:29.469 Test: blockdev write zeroes read split ...passed 00:12:29.469 Test: blockdev write zeroes read split partial ...passed 00:12:29.469 Test: blockdev reset ...passed 00:12:29.469 Test: blockdev write read 8 blocks ...passed 00:12:29.469 Test: blockdev write read size > 128k ...passed 00:12:29.469 Test: blockdev write read invalid size ...passed 00:12:29.469 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.469 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.469 Test: blockdev write read max offset ...passed 00:12:29.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.469 Test: blockdev writev readv 8 blocks ...passed 00:12:29.469 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.469 Test: blockdev writev readv block ...passed 00:12:29.469 Test: blockdev writev readv size > 128k ...passed 00:12:29.469 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.469 Test: blockdev comparev and writev ...passed 00:12:29.469 Test: blockdev nvme passthru rw ...passed 00:12:29.469 Test: blockdev nvme passthru vendor specific ...passed 00:12:29.469 Test: blockdev nvme admin passthru ...passed 00:12:29.469 Test: blockdev copy ...passed 00:12:29.469 Suite: bdevio tests on: Malloc0 00:12:29.469 Test: blockdev write read block ...passed 00:12:29.469 Test: blockdev write zeroes read block ...passed 00:12:29.469 Test: blockdev write zeroes read no split ...passed 00:12:29.469 Test: blockdev write zeroes read split ...passed 00:12:29.469 Test: blockdev write zeroes read split partial ...passed 00:12:29.469 Test: blockdev reset ...passed 00:12:29.469 Test: blockdev write read 8 blocks ...passed 00:12:29.469 Test: blockdev write read size > 128k ...passed 00:12:29.469 Test: blockdev write read invalid size ...passed 00:12:29.469 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:29.469 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:29.470 Test: blockdev write read max offset ...passed 00:12:29.470 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:29.470 Test: blockdev writev readv 8 blocks ...passed 00:12:29.470 Test: blockdev writev readv 30 x 1block ...passed 00:12:29.470 Test: blockdev writev readv block ...passed 00:12:29.470 Test: blockdev writev readv size > 128k ...passed 00:12:29.470 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:29.470 Test: blockdev comparev and writev ...passed 00:12:29.470 Test: blockdev nvme passthru rw ...passed 00:12:29.470 Test: blockdev nvme passthru vendor specific ...passed 00:12:29.470 Test: blockdev nvme admin passthru ...passed 00:12:29.470 Test: blockdev copy ...passed 00:12:29.470 00:12:29.470 Run Summary: Type Total Ran Passed Failed Inactive 00:12:29.470 suites 16 16 n/a 0 0 00:12:29.470 tests 368 368 368 0 0 00:12:29.470 asserts 2224 2224 2224 0 n/a 00:12:29.470 00:12:29.470 Elapsed time = 2.343 seconds 00:12:29.470 0 00:12:29.470 16:28:06 -- bdev/blockdev.sh@293 -- # killprocess 111136 00:12:29.470 16:28:06 -- common/autotest_common.sh@926 -- # '[' -z 111136 ']' 00:12:29.470 16:28:06 -- common/autotest_common.sh@930 -- # kill -0 111136 00:12:29.470 16:28:06 -- common/autotest_common.sh@931 -- # uname 00:12:29.470 16:28:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:29.470 16:28:06 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111136 00:12:29.470 16:28:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:29.470 killing process with pid 111136 00:12:29.470 16:28:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:29.470 16:28:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111136' 00:12:29.470 16:28:06 -- common/autotest_common.sh@945 -- # kill 111136 00:12:29.470 16:28:06 -- common/autotest_common.sh@950 -- # wait 111136 00:12:30.842 16:28:07 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:30.842 00:12:30.842 real 0m4.113s 00:12:30.842 user 0m10.722s 00:12:30.842 sys 0m0.539s 00:12:30.842 ************************************ 00:12:30.842 END TEST bdev_bounds 00:12:30.842 ************************************ 00:12:30.842 16:28:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.842 16:28:07 -- common/autotest_common.sh@10 -- # set +x 00:12:31.100 16:28:07 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:31.100 16:28:07 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:31.100 16:28:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:31.100 16:28:07 -- common/autotest_common.sh@10 -- # set +x 00:12:31.100 ************************************ 00:12:31.100 START TEST bdev_nbd 00:12:31.100 ************************************ 00:12:31.100 16:28:07 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:31.100 16:28:07 -- bdev/blockdev.sh@298 -- # uname -s 00:12:31.100 16:28:07 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:31.100 16:28:07 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.100 16:28:07 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:31.100 16:28:07 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:12:31.100 16:28:07 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:31.100 16:28:07 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:12:31.100 16:28:07 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:31.100 16:28:07 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:12:31.100 16:28:07 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:31.100 16:28:07 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:12:31.100 16:28:07 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:12:31.100 16:28:07 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:31.100 16:28:07 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:12:31.100 16:28:07 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:31.100 16:28:07 -- bdev/blockdev.sh@316 -- # nbd_pid=111220 00:12:31.100 16:28:07 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:31.100 16:28:07 -- bdev/blockdev.sh@318 -- # waitforlisten 111220 /var/tmp/spdk-nbd.sock 00:12:31.100 16:28:07 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:31.100 16:28:07 -- common/autotest_common.sh@819 -- # '[' -z 111220 ']' 
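The waitforlisten 111220 call traced at this point blocks until the freshly forked bdev_svc answers on its UNIX-domain RPC socket. The real helper lives in autotest_common.sh; a condensed approximation of its polling pattern (rpc_get_methods is a cheap RPC that succeeds as soon as the server accepts requests):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 1; i <= 100; i++)); do
            # Bail out early if the target process died during startup.
            kill -0 "$pid" 2>/dev/null || return 1
            if scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }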
00:12:31.100 16:28:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:31.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:31.100 16:28:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:31.100 16:28:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:31.100 16:28:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:31.100 16:28:07 -- common/autotest_common.sh@10 -- # set +x 00:12:31.100 [2024-07-11 16:28:07.724536] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:31.100 [2024-07-11 16:28:07.724682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.100 [2024-07-11 16:28:07.877569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.357 [2024-07-11 16:28:08.034661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.616 [2024-07-11 16:28:08.371725] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:31.616 [2024-07-11 16:28:08.371821] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:31.616 [2024-07-11 16:28:08.379681] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:31.616 [2024-07-11 16:28:08.379762] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:31.616 [2024-07-11 16:28:08.387690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:31.616 [2024-07-11 16:28:08.387748] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:31.616 [2024-07-11 16:28:08.387775] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:31.874 [2024-07-11 16:28:08.574235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:31.874 [2024-07-11 16:28:08.574365] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.874 [2024-07-11 16:28:08.574443] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:31.874 [2024-07-11 16:28:08.574468] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.874 [2024-07-11 16:28:08.577123] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.874 [2024-07-11 16:28:08.577202] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:32.808 16:28:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:32.808 16:28:09 -- common/autotest_common.sh@852 -- # return 0 00:12:32.808 16:28:09 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 
Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@24 -- # local i 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:32.808 16:28:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:32.808 16:28:09 -- common/autotest_common.sh@857 -- # local i 00:12:32.808 16:28:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:32.808 16:28:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:32.808 16:28:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:32.808 16:28:09 -- common/autotest_common.sh@861 -- # break 00:12:32.808 16:28:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:32.808 16:28:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:32.808 16:28:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:32.808 1+0 records in 00:12:32.808 1+0 records out 00:12:32.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391479 s, 10.5 MB/s 00:12:32.808 16:28:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.808 16:28:09 -- common/autotest_common.sh@874 -- # size=4096 00:12:32.808 16:28:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:32.808 16:28:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:32.808 16:28:09 -- common/autotest_common.sh@877 -- # return 0 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:32.808 16:28:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:33.066 16:28:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:33.066 16:28:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:33.066 16:28:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:33.066 16:28:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:33.066 16:28:09 -- common/autotest_common.sh@857 -- # local i 00:12:33.066 16:28:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:33.066 16:28:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:33.066 16:28:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:33.066 16:28:09 -- common/autotest_common.sh@861 -- # break 00:12:33.066 16:28:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:33.066 16:28:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:33.066 16:28:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:12:33.066 1+0 records in 00:12:33.066 1+0 records out 00:12:33.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129633 s, 3.2 MB/s 00:12:33.066 16:28:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.066 16:28:09 -- common/autotest_common.sh@874 -- # size=4096 00:12:33.066 16:28:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.066 16:28:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:33.066 16:28:09 -- common/autotest_common.sh@877 -- # return 0 00:12:33.066 16:28:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:33.066 16:28:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:33.066 16:28:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:33.324 16:28:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:33.324 16:28:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:33.324 16:28:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:33.324 16:28:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:33.324 16:28:10 -- common/autotest_common.sh@857 -- # local i 00:12:33.324 16:28:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:33.324 16:28:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:33.324 16:28:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:33.324 16:28:10 -- common/autotest_common.sh@861 -- # break 00:12:33.324 16:28:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:33.324 16:28:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:33.324 16:28:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.324 1+0 records in 00:12:33.324 1+0 records out 00:12:33.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296466 s, 13.8 MB/s 00:12:33.324 16:28:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.324 16:28:10 -- common/autotest_common.sh@874 -- # size=4096 00:12:33.324 16:28:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.324 16:28:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:33.324 16:28:10 -- common/autotest_common.sh@877 -- # return 0 00:12:33.324 16:28:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:33.324 16:28:10 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:33.324 16:28:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:33.582 16:28:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:33.582 16:28:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:33.582 16:28:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:33.582 16:28:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:33.582 16:28:10 -- common/autotest_common.sh@857 -- # local i 00:12:33.582 16:28:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:33.582 16:28:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:33.582 16:28:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:33.582 16:28:10 -- common/autotest_common.sh@861 -- # break 00:12:33.582 16:28:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:33.582 16:28:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:33.582 16:28:10 -- common/autotest_common.sh@873 -- # dd 
if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.582 1+0 records in 00:12:33.582 1+0 records out 00:12:33.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359284 s, 11.4 MB/s 00:12:33.582 16:28:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.582 16:28:10 -- common/autotest_common.sh@874 -- # size=4096 00:12:33.582 16:28:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:33.582 16:28:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:33.840 16:28:10 -- common/autotest_common.sh@877 -- # return 0 00:12:33.840 16:28:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:33.840 16:28:10 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:33.840 16:28:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:33.840 16:28:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:33.840 16:28:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:33.840 16:28:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:33.840 16:28:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:33.840 16:28:10 -- common/autotest_common.sh@857 -- # local i 00:12:33.840 16:28:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:33.840 16:28:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:33.840 16:28:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:33.840 16:28:10 -- common/autotest_common.sh@861 -- # break 00:12:33.840 16:28:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:33.840 16:28:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:33.840 16:28:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:33.840 1+0 records in 00:12:33.840 1+0 records out 00:12:33.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449439 s, 9.1 MB/s 00:12:34.098 16:28:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.098 16:28:10 -- common/autotest_common.sh@874 -- # size=4096 00:12:34.098 16:28:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.098 16:28:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:34.098 16:28:10 -- common/autotest_common.sh@877 -- # return 0 00:12:34.098 16:28:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:34.098 16:28:10 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:34.098 16:28:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:34.098 16:28:10 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:34.098 16:28:10 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:34.098 16:28:10 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:34.098 16:28:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:34.098 16:28:10 -- common/autotest_common.sh@857 -- # local i 00:12:34.098 16:28:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:34.098 16:28:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:34.098 16:28:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:34.098 16:28:10 -- common/autotest_common.sh@861 -- # break 00:12:34.098 16:28:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:34.098 16:28:10 -- 
common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:34.098 16:28:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.098 1+0 records in 00:12:34.098 1+0 records out 00:12:34.098 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452676 s, 9.0 MB/s 00:12:34.098 16:28:10 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.098 16:28:10 -- common/autotest_common.sh@874 -- # size=4096 00:12:34.098 16:28:10 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.098 16:28:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:34.098 16:28:10 -- common/autotest_common.sh@877 -- # return 0 00:12:34.098 16:28:10 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:34.098 16:28:10 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:34.098 16:28:10 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:34.355 16:28:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:34.355 16:28:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:34.612 16:28:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:34.612 16:28:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:34.612 16:28:11 -- common/autotest_common.sh@857 -- # local i 00:12:34.612 16:28:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:34.612 16:28:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:34.612 16:28:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:34.612 16:28:11 -- common/autotest_common.sh@861 -- # break 00:12:34.612 16:28:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:34.612 16:28:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:34.612 16:28:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.612 1+0 records in 00:12:34.612 1+0 records out 00:12:34.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470997 s, 8.7 MB/s 00:12:34.612 16:28:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.612 16:28:11 -- common/autotest_common.sh@874 -- # size=4096 00:12:34.612 16:28:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.612 16:28:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:34.612 16:28:11 -- common/autotest_common.sh@877 -- # return 0 00:12:34.612 16:28:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:34.612 16:28:11 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:34.612 16:28:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:34.869 16:28:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:34.869 16:28:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:34.869 16:28:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:34.869 16:28:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:34.869 16:28:11 -- common/autotest_common.sh@857 -- # local i 00:12:34.869 16:28:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:34.869 16:28:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:34.869 16:28:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:34.869 16:28:11 -- common/autotest_common.sh@861 -- # break 
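Each nbd_start_disk RPC in this stretch binds one bdev to one /dev/nbdX node, after which waitfornbd confirms the kernel device before the loop advances. The whole start-up pass condenses to something like the following (bdev order mirrors the bdev_list array set up above; a sketch, not the literal nbd_common.sh body):

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    bdevs=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 \
           Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0)
    for i in "${!bdevs[@]}"; do
        # Map the i-th bdev onto /dev/nbd$i and wait for the device node.
        $rpc nbd_start_disk "${bdevs[$i]}" "/dev/nbd$i"
        waitfornbd "nbd$i"
    done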
00:12:34.869 16:28:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:34.869 16:28:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:34.869 16:28:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:34.869 1+0 records in 00:12:34.869 1+0 records out 00:12:34.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473217 s, 8.7 MB/s 00:12:34.869 16:28:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.869 16:28:11 -- common/autotest_common.sh@874 -- # size=4096 00:12:34.869 16:28:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:34.869 16:28:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:34.869 16:28:11 -- common/autotest_common.sh@877 -- # return 0 00:12:34.869 16:28:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:34.869 16:28:11 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:34.869 16:28:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:35.127 16:28:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:35.127 16:28:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:35.127 16:28:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:35.127 16:28:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:35.127 16:28:11 -- common/autotest_common.sh@857 -- # local i 00:12:35.127 16:28:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:35.127 16:28:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:35.127 16:28:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:35.127 16:28:11 -- common/autotest_common.sh@861 -- # break 00:12:35.127 16:28:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:35.127 16:28:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:35.127 16:28:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.127 1+0 records in 00:12:35.127 1+0 records out 00:12:35.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621876 s, 6.6 MB/s 00:12:35.127 16:28:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.127 16:28:11 -- common/autotest_common.sh@874 -- # size=4096 00:12:35.127 16:28:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.127 16:28:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:35.127 16:28:11 -- common/autotest_common.sh@877 -- # return 0 00:12:35.127 16:28:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:35.127 16:28:11 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:35.127 16:28:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:35.385 16:28:11 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:35.385 16:28:11 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:35.385 16:28:11 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:35.385 16:28:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:35.385 16:28:11 -- common/autotest_common.sh@857 -- # local i 00:12:35.385 16:28:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:35.385 16:28:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:35.385 16:28:11 -- common/autotest_common.sh@860 -- # grep -q -w 
nbd9 /proc/partitions 00:12:35.385 16:28:11 -- common/autotest_common.sh@861 -- # break 00:12:35.385 16:28:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:35.385 16:28:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:35.385 16:28:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.385 1+0 records in 00:12:35.385 1+0 records out 00:12:35.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612159 s, 6.7 MB/s 00:12:35.385 16:28:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.385 16:28:11 -- common/autotest_common.sh@874 -- # size=4096 00:12:35.385 16:28:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.385 16:28:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:35.385 16:28:11 -- common/autotest_common.sh@877 -- # return 0 00:12:35.385 16:28:11 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:35.385 16:28:11 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:35.385 16:28:11 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:35.643 16:28:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:35.644 16:28:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:35.644 16:28:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:35.644 16:28:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:35.644 16:28:12 -- common/autotest_common.sh@857 -- # local i 00:12:35.644 16:28:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:35.644 16:28:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:35.644 16:28:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:35.644 16:28:12 -- common/autotest_common.sh@861 -- # break 00:12:35.644 16:28:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:35.644 16:28:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:35.644 16:28:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.644 1+0 records in 00:12:35.644 1+0 records out 00:12:35.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629055 s, 6.5 MB/s 00:12:35.644 16:28:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.644 16:28:12 -- common/autotest_common.sh@874 -- # size=4096 00:12:35.644 16:28:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.644 16:28:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:35.644 16:28:12 -- common/autotest_common.sh@877 -- # return 0 00:12:35.644 16:28:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:35.644 16:28:12 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:35.644 16:28:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:35.903 16:28:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:35.903 16:28:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:35.903 16:28:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:35.903 16:28:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:35.903 16:28:12 -- common/autotest_common.sh@857 -- # local i 00:12:35.903 16:28:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:35.903 16:28:12 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:35.903 16:28:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:35.903 16:28:12 -- common/autotest_common.sh@861 -- # break 00:12:35.903 16:28:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:35.903 16:28:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:35.903 16:28:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:35.903 1+0 records in 00:12:35.903 1+0 records out 00:12:35.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710559 s, 5.8 MB/s 00:12:35.903 16:28:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.903 16:28:12 -- common/autotest_common.sh@874 -- # size=4096 00:12:35.903 16:28:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:35.903 16:28:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:35.903 16:28:12 -- common/autotest_common.sh@877 -- # return 0 00:12:35.903 16:28:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:35.903 16:28:12 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:35.903 16:28:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:36.199 16:28:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:36.199 16:28:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:36.199 16:28:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:36.199 16:28:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:36.199 16:28:12 -- common/autotest_common.sh@857 -- # local i 00:12:36.199 16:28:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:36.199 16:28:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:36.199 16:28:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:36.199 16:28:12 -- common/autotest_common.sh@861 -- # break 00:12:36.199 16:28:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:36.199 16:28:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:36.199 16:28:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.199 1+0 records in 00:12:36.199 1+0 records out 00:12:36.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011265 s, 3.6 MB/s 00:12:36.199 16:28:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.199 16:28:12 -- common/autotest_common.sh@874 -- # size=4096 00:12:36.199 16:28:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.199 16:28:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:36.199 16:28:12 -- common/autotest_common.sh@877 -- # return 0 00:12:36.199 16:28:12 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:36.199 16:28:12 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:36.199 16:28:12 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:36.199 16:28:12 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:36.199 16:28:12 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:36.199 16:28:12 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:36.199 16:28:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:36.199 16:28:12 -- common/autotest_common.sh@857 -- # local i 
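The xtrace around here spells out waitfornbd itself: a bounded loop polls /proc/partitions until the kernel exposes the device, then a second bounded loop proves the device serves I/O with a single direct 4 KiB read, keeping only the copied size. Reassembled from the trace (the @859/@873 markers are autotest_common.sh line numbers; the temp-file path is shortened here):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            # One 4 KiB O_DIRECT read; retry briefly if the device
            # is not ready to serve I/O yet.
            dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]    # non-empty copy == readable device
    }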
00:12:36.199 16:28:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:36.199 16:28:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:36.199 16:28:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:36.199 16:28:12 -- common/autotest_common.sh@861 -- # break 00:12:36.199 16:28:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:36.199 16:28:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:36.199 16:28:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.199 1+0 records in 00:12:36.199 1+0 records out 00:12:36.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000923835 s, 4.4 MB/s 00:12:36.199 16:28:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.457 16:28:13 -- common/autotest_common.sh@874 -- # size=4096 00:12:36.457 16:28:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.457 16:28:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:36.457 16:28:13 -- common/autotest_common.sh@877 -- # return 0 00:12:36.457 16:28:13 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:36.457 16:28:13 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:36.457 16:28:13 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:36.457 16:28:13 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:36.457 16:28:13 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:36.457 16:28:13 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:36.457 16:28:13 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:36.457 16:28:13 -- common/autotest_common.sh@857 -- # local i 00:12:36.457 16:28:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:36.458 16:28:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:36.458 16:28:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:36.458 16:28:13 -- common/autotest_common.sh@861 -- # break 00:12:36.458 16:28:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:36.458 16:28:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:36.458 16:28:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.458 1+0 records in 00:12:36.458 1+0 records out 00:12:36.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722322 s, 5.7 MB/s 00:12:36.458 16:28:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.458 16:28:13 -- common/autotest_common.sh@874 -- # size=4096 00:12:36.458 16:28:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.458 16:28:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:36.458 16:28:13 -- common/autotest_common.sh@877 -- # return 0 00:12:36.458 16:28:13 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:36.458 16:28:13 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:36.458 16:28:13 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:36.715 16:28:13 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:36.715 16:28:13 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:36.715 16:28:13 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:36.715 16:28:13 -- common/autotest_common.sh@856 -- 
# local nbd_name=nbd15 00:12:36.715 16:28:13 -- common/autotest_common.sh@857 -- # local i 00:12:36.715 16:28:13 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:36.715 16:28:13 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:36.715 16:28:13 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:36.715 16:28:13 -- common/autotest_common.sh@861 -- # break 00:12:36.715 16:28:13 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:36.715 16:28:13 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:36.715 16:28:13 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.715 1+0 records in 00:12:36.715 1+0 records out 00:12:36.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00167591 s, 2.4 MB/s 00:12:36.715 16:28:13 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.715 16:28:13 -- common/autotest_common.sh@874 -- # size=4096 00:12:36.715 16:28:13 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.715 16:28:13 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:36.715 16:28:13 -- common/autotest_common.sh@877 -- # return 0 00:12:36.715 16:28:13 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:36.715 16:28:13 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:36.715 16:28:13 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.973 16:28:13 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:36.973 { 00:12:36.973 "nbd_device": "/dev/nbd0", 00:12:36.973 "bdev_name": "Malloc0" 00:12:36.973 }, 00:12:36.973 { 00:12:36.973 "nbd_device": "/dev/nbd1", 00:12:36.973 "bdev_name": "Malloc1p0" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd2", 00:12:36.974 "bdev_name": "Malloc1p1" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd3", 00:12:36.974 "bdev_name": "Malloc2p0" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd4", 00:12:36.974 "bdev_name": "Malloc2p1" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd5", 00:12:36.974 "bdev_name": "Malloc2p2" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd6", 00:12:36.974 "bdev_name": "Malloc2p3" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd7", 00:12:36.974 "bdev_name": "Malloc2p4" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd8", 00:12:36.974 "bdev_name": "Malloc2p5" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd9", 00:12:36.974 "bdev_name": "Malloc2p6" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd10", 00:12:36.974 "bdev_name": "Malloc2p7" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd11", 00:12:36.974 "bdev_name": "TestPT" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd12", 00:12:36.974 "bdev_name": "raid0" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd13", 00:12:36.974 "bdev_name": "concat0" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd14", 00:12:36.974 "bdev_name": "raid1" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd15", 00:12:36.974 "bdev_name": "AIO0" 00:12:36.974 } 00:12:36.974 ]' 00:12:36.974 16:28:13 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:36.974 16:28:13 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | 
.nbd_device' 00:12:36.974 16:28:13 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd0", 00:12:36.974 "bdev_name": "Malloc0" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd1", 00:12:36.974 "bdev_name": "Malloc1p0" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd2", 00:12:36.974 "bdev_name": "Malloc1p1" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd3", 00:12:36.974 "bdev_name": "Malloc2p0" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd4", 00:12:36.974 "bdev_name": "Malloc2p1" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd5", 00:12:36.974 "bdev_name": "Malloc2p2" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd6", 00:12:36.974 "bdev_name": "Malloc2p3" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd7", 00:12:36.974 "bdev_name": "Malloc2p4" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd8", 00:12:36.974 "bdev_name": "Malloc2p5" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd9", 00:12:36.974 "bdev_name": "Malloc2p6" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd10", 00:12:36.974 "bdev_name": "Malloc2p7" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd11", 00:12:36.974 "bdev_name": "TestPT" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd12", 00:12:36.974 "bdev_name": "raid0" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd13", 00:12:36.974 "bdev_name": "concat0" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd14", 00:12:36.974 "bdev_name": "raid1" 00:12:36.974 }, 00:12:36.974 { 00:12:36.974 "nbd_device": "/dev/nbd15", 00:12:36.974 "bdev_name": "AIO0" 00:12:36.974 } 00:12:36.974 ]' 00:12:36.974 16:28:13 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:36.974 16:28:13 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.974 16:28:13 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:12:36.974 16:28:13 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:36.974 16:28:13 -- bdev/nbd_common.sh@51 -- # local i 00:12:36.974 16:28:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.974 16:28:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:37.232 16:28:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.232 16:28:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.232 16:28:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.232 16:28:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.232 16:28:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.232 16:28:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.232 16:28:14 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:37.490 16:28:14 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:37.490 16:28:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.490 16:28:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.490 16:28:14 -- bdev/nbd_common.sh@41 -- # break 00:12:37.490 16:28:14 -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.490 16:28:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.490 16:28:14 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@41 -- # break 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.749 16:28:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:38.007 16:28:14 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:38.007 16:28:14 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:38.007 16:28:14 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:38.007 16:28:14 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.007 16:28:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.007 16:28:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:38.007 16:28:14 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:38.265 16:28:14 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:38.265 16:28:14 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.265 16:28:14 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:38.265 16:28:14 -- bdev/nbd_common.sh@41 -- # break 00:12:38.265 16:28:14 -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.265 16:28:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.265 16:28:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@41 -- # break 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.524 16:28:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:38.782 16:28:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:38.782 16:28:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:38.782 16:28:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:38.782 16:28:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:38.782 16:28:15 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.782 16:28:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:38.782 16:28:15 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:38.782 16:28:15 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:38.782 16:28:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:38.782 16:28:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:38.782 16:28:15 -- bdev/nbd_common.sh@41 -- # break 00:12:38.782 16:28:15 -- bdev/nbd_common.sh@45 -- # return 0 00:12:38.782 16:28:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:38.782 16:28:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:39.041 16:28:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:39.041 16:28:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:39.041 16:28:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:39.041 16:28:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.041 16:28:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.041 16:28:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:39.041 16:28:15 -- bdev/nbd_common.sh@41 -- # break 00:12:39.041 16:28:15 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.041 16:28:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.041 16:28:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:39.299 16:28:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:39.299 16:28:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:39.299 16:28:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:39.299 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.299 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.299 16:28:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:39.299 16:28:16 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@41 -- # break 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:39.558 16:28:16 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@41 -- # break 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:39.817 16:28:16 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:40.076 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:40.076 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.076 16:28:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:40.076 16:28:16 -- bdev/nbd_common.sh@41 -- # break 00:12:40.076 16:28:16 -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.076 16:28:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:40.076 16:28:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:40.334 16:28:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:40.334 16:28:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:40.334 16:28:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:40.334 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.334 16:28:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.334 16:28:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:40.334 16:28:16 -- bdev/nbd_common.sh@41 -- # break 00:12:40.334 16:28:16 -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.334 16:28:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:40.334 16:28:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:40.593 16:28:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:40.593 16:28:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:40.593 16:28:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:40.593 16:28:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.593 16:28:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.593 16:28:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:40.593 16:28:17 -- bdev/nbd_common.sh@41 -- # break 00:12:40.593 16:28:17 -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.593 16:28:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:40.593 16:28:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:40.851 16:28:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:40.851 16:28:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:40.851 16:28:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:40.851 16:28:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.851 16:28:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.851 16:28:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:40.851 16:28:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:40.851 16:28:17 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:40.851 16:28:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.851 16:28:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:40.851 16:28:17 -- bdev/nbd_common.sh@41 -- # break 00:12:40.851 16:28:17 -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.851 16:28:17 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:40.851 16:28:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:41.109 16:28:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:41.109 16:28:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:41.109 16:28:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:41.109 16:28:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.109 16:28:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.109 16:28:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:41.109 16:28:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:41.367 16:28:17 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:41.367 16:28:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.367 16:28:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:41.367 16:28:17 -- bdev/nbd_common.sh@41 -- # break 00:12:41.367 16:28:17 -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.367 16:28:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.367 16:28:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:41.367 16:28:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:41.368 16:28:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:41.368 16:28:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:41.368 16:28:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.368 16:28:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.368 16:28:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:41.368 16:28:18 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:41.626 16:28:18 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:41.626 16:28:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.626 16:28:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:41.626 16:28:18 -- bdev/nbd_common.sh@41 -- # break 00:12:41.626 16:28:18 -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.626 16:28:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.626 16:28:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@41 -- # break 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:41.884 16:28:18 -- 
bdev/nbd_common.sh@41 -- # break 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.884 16:28:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@65 -- # true 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@65 -- # count=0 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@122 -- # count=0 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@127 -- # return 0 00:12:42.142 16:28:18 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@12 -- # local i 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:42.142 16:28:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:42.400 /dev/nbd0 00:12:42.658 16:28:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:42.658 16:28:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:42.658 16:28:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:12:42.658 16:28:19 -- common/autotest_common.sh@857 -- # local i 00:12:42.658 16:28:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:42.658 16:28:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:42.658 16:28:19 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:12:42.658 16:28:19 -- common/autotest_common.sh@861 -- # break 00:12:42.658 16:28:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:42.658 16:28:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:42.658 16:28:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.658 1+0 records in 00:12:42.658 1+0 records out 00:12:42.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000454182 s, 9.0 MB/s 00:12:42.658 16:28:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.658 16:28:19 -- common/autotest_common.sh@874 -- # size=4096 00:12:42.658 16:28:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.658 16:28:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:42.658 16:28:19 -- common/autotest_common.sh@877 -- # return 0 00:12:42.658 16:28:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.658 16:28:19 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:42.658 16:28:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:42.917 /dev/nbd1 00:12:42.917 16:28:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:42.917 16:28:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:42.917 16:28:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:12:42.917 16:28:19 -- common/autotest_common.sh@857 -- # local i 00:12:42.917 16:28:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:42.917 16:28:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:42.917 16:28:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:12:42.917 16:28:19 -- common/autotest_common.sh@861 -- # break 00:12:42.917 16:28:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:42.917 16:28:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:42.917 16:28:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.917 1+0 records in 00:12:42.917 1+0 records out 00:12:42.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414753 s, 9.9 MB/s 00:12:42.917 16:28:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.917 16:28:19 -- common/autotest_common.sh@874 -- # size=4096 00:12:42.917 16:28:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.917 16:28:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:42.917 16:28:19 -- common/autotest_common.sh@877 -- # return 0 00:12:42.917 16:28:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.917 16:28:19 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:42.917 16:28:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:43.176 /dev/nbd10 00:12:43.176 16:28:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:43.176 16:28:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:43.176 16:28:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:12:43.176 16:28:19 -- common/autotest_common.sh@857 -- # local i 00:12:43.176 16:28:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:43.176 16:28:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:43.176 
16:28:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:12:43.176 16:28:19 -- common/autotest_common.sh@861 -- # break 00:12:43.176 16:28:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:43.176 16:28:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:43.176 16:28:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.176 1+0 records in 00:12:43.176 1+0 records out 00:12:43.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540437 s, 7.6 MB/s 00:12:43.176 16:28:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.176 16:28:19 -- common/autotest_common.sh@874 -- # size=4096 00:12:43.176 16:28:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.176 16:28:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:43.176 16:28:19 -- common/autotest_common.sh@877 -- # return 0 00:12:43.176 16:28:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.176 16:28:19 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:43.176 16:28:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:43.435 /dev/nbd11 00:12:43.435 16:28:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:43.435 16:28:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:43.435 16:28:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:12:43.435 16:28:20 -- common/autotest_common.sh@857 -- # local i 00:12:43.435 16:28:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:43.435 16:28:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:43.435 16:28:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:12:43.435 16:28:20 -- common/autotest_common.sh@861 -- # break 00:12:43.435 16:28:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:43.435 16:28:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:43.435 16:28:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.435 1+0 records in 00:12:43.435 1+0 records out 00:12:43.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650973 s, 6.3 MB/s 00:12:43.435 16:28:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.435 16:28:20 -- common/autotest_common.sh@874 -- # size=4096 00:12:43.435 16:28:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.435 16:28:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:43.435 16:28:20 -- common/autotest_common.sh@877 -- # return 0 00:12:43.435 16:28:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.435 16:28:20 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:43.435 16:28:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:43.693 /dev/nbd12 00:12:43.693 16:28:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:43.693 16:28:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:43.693 16:28:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:12:43.693 16:28:20 -- common/autotest_common.sh@857 -- # local i 00:12:43.693 16:28:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:43.693 16:28:20 -- common/autotest_common.sh@859 -- # (( i 
<= 20 )) 00:12:43.693 16:28:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:12:43.693 16:28:20 -- common/autotest_common.sh@861 -- # break 00:12:43.693 16:28:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:43.693 16:28:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:43.693 16:28:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.693 1+0 records in 00:12:43.693 1+0 records out 00:12:43.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602179 s, 6.8 MB/s 00:12:43.693 16:28:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.693 16:28:20 -- common/autotest_common.sh@874 -- # size=4096 00:12:43.693 16:28:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.693 16:28:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:43.693 16:28:20 -- common/autotest_common.sh@877 -- # return 0 00:12:43.693 16:28:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.693 16:28:20 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:43.693 16:28:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:43.952 /dev/nbd13 00:12:43.952 16:28:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:43.952 16:28:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:43.952 16:28:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:12:43.952 16:28:20 -- common/autotest_common.sh@857 -- # local i 00:12:43.952 16:28:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:43.952 16:28:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:43.952 16:28:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:12:43.952 16:28:20 -- common/autotest_common.sh@861 -- # break 00:12:43.952 16:28:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:43.952 16:28:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:43.952 16:28:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.952 1+0 records in 00:12:43.952 1+0 records out 00:12:43.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623943 s, 6.6 MB/s 00:12:43.952 16:28:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.952 16:28:20 -- common/autotest_common.sh@874 -- # size=4096 00:12:43.952 16:28:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.952 16:28:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:43.952 16:28:20 -- common/autotest_common.sh@877 -- # return 0 00:12:43.952 16:28:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:43.952 16:28:20 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:43.952 16:28:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:44.210 /dev/nbd14 00:12:44.210 16:28:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:44.210 16:28:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:44.210 16:28:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:12:44.210 16:28:20 -- common/autotest_common.sh@857 -- # local i 00:12:44.210 16:28:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:44.210 16:28:20 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:44.210 16:28:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:12:44.210 16:28:20 -- common/autotest_common.sh@861 -- # break 00:12:44.210 16:28:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:44.210 16:28:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:44.210 16:28:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.210 1+0 records in 00:12:44.210 1+0 records out 00:12:44.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000362111 s, 11.3 MB/s 00:12:44.210 16:28:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.210 16:28:20 -- common/autotest_common.sh@874 -- # size=4096 00:12:44.210 16:28:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.210 16:28:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:44.210 16:28:20 -- common/autotest_common.sh@877 -- # return 0 00:12:44.210 16:28:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.210 16:28:20 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:44.210 16:28:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:44.469 /dev/nbd15 00:12:44.469 16:28:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:44.469 16:28:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:44.469 16:28:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:12:44.469 16:28:21 -- common/autotest_common.sh@857 -- # local i 00:12:44.469 16:28:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:44.469 16:28:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:44.469 16:28:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:12:44.469 16:28:21 -- common/autotest_common.sh@861 -- # break 00:12:44.469 16:28:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:44.469 16:28:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:44.469 16:28:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.469 1+0 records in 00:12:44.469 1+0 records out 00:12:44.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565967 s, 7.2 MB/s 00:12:44.469 16:28:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.469 16:28:21 -- common/autotest_common.sh@874 -- # size=4096 00:12:44.469 16:28:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.469 16:28:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:44.469 16:28:21 -- common/autotest_common.sh@877 -- # return 0 00:12:44.469 16:28:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.469 16:28:21 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:44.469 16:28:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:44.727 /dev/nbd2 00:12:44.727 16:28:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:44.727 16:28:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:44.727 16:28:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:12:44.727 16:28:21 -- common/autotest_common.sh@857 -- # local i 00:12:44.727 16:28:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 
00:12:44.727 16:28:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:44.727 16:28:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:12:44.727 16:28:21 -- common/autotest_common.sh@861 -- # break 00:12:44.727 16:28:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:44.727 16:28:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:44.727 16:28:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.727 1+0 records in 00:12:44.727 1+0 records out 00:12:44.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553042 s, 7.4 MB/s 00:12:44.727 16:28:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.727 16:28:21 -- common/autotest_common.sh@874 -- # size=4096 00:12:44.728 16:28:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.728 16:28:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:44.728 16:28:21 -- common/autotest_common.sh@877 -- # return 0 00:12:44.728 16:28:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.728 16:28:21 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:44.728 16:28:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:44.986 /dev/nbd3 00:12:44.986 16:28:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:44.986 16:28:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:44.986 16:28:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:12:44.986 16:28:21 -- common/autotest_common.sh@857 -- # local i 00:12:44.986 16:28:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:44.986 16:28:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:44.986 16:28:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:12:44.986 16:28:21 -- common/autotest_common.sh@861 -- # break 00:12:44.986 16:28:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:44.986 16:28:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:44.986 16:28:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:44.986 1+0 records in 00:12:44.986 1+0 records out 00:12:44.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000568336 s, 7.2 MB/s 00:12:44.986 16:28:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.986 16:28:21 -- common/autotest_common.sh@874 -- # size=4096 00:12:44.986 16:28:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:44.986 16:28:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:44.986 16:28:21 -- common/autotest_common.sh@877 -- # return 0 00:12:44.986 16:28:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.986 16:28:21 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:44.986 16:28:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:45.245 /dev/nbd4 00:12:45.245 16:28:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:45.245 16:28:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:45.245 16:28:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:12:45.245 16:28:21 -- common/autotest_common.sh@857 -- # local i 00:12:45.245 16:28:21 -- common/autotest_common.sh@859 -- # (( i 
= 1 )) 00:12:45.245 16:28:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:45.245 16:28:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:12:45.245 16:28:21 -- common/autotest_common.sh@861 -- # break 00:12:45.245 16:28:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:45.245 16:28:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:45.245 16:28:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.245 1+0 records in 00:12:45.245 1+0 records out 00:12:45.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614586 s, 6.7 MB/s 00:12:45.245 16:28:21 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.245 16:28:21 -- common/autotest_common.sh@874 -- # size=4096 00:12:45.245 16:28:21 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.245 16:28:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:45.245 16:28:21 -- common/autotest_common.sh@877 -- # return 0 00:12:45.245 16:28:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:45.245 16:28:21 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:45.245 16:28:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:45.504 /dev/nbd5 00:12:45.504 16:28:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:45.504 16:28:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:45.504 16:28:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:12:45.504 16:28:22 -- common/autotest_common.sh@857 -- # local i 00:12:45.504 16:28:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:45.504 16:28:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:45.504 16:28:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:12:45.504 16:28:22 -- common/autotest_common.sh@861 -- # break 00:12:45.504 16:28:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:45.504 16:28:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:45.504 16:28:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.504 1+0 records in 00:12:45.504 1+0 records out 00:12:45.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618922 s, 6.6 MB/s 00:12:45.504 16:28:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.504 16:28:22 -- common/autotest_common.sh@874 -- # size=4096 00:12:45.504 16:28:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.504 16:28:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:45.504 16:28:22 -- common/autotest_common.sh@877 -- # return 0 00:12:45.504 16:28:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:45.504 16:28:22 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:45.504 16:28:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:45.762 /dev/nbd6 00:12:45.762 16:28:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:45.762 16:28:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:45.762 16:28:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:12:45.762 16:28:22 -- common/autotest_common.sh@857 -- # local i 00:12:45.762 16:28:22 -- common/autotest_common.sh@859 -- # (( i 
= 1 )) 00:12:45.762 16:28:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:45.762 16:28:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:12:45.762 16:28:22 -- common/autotest_common.sh@861 -- # break 00:12:45.762 16:28:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:45.762 16:28:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:45.762 16:28:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.762 1+0 records in 00:12:45.762 1+0 records out 00:12:45.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121408 s, 3.4 MB/s 00:12:45.762 16:28:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.762 16:28:22 -- common/autotest_common.sh@874 -- # size=4096 00:12:45.762 16:28:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.762 16:28:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:45.762 16:28:22 -- common/autotest_common.sh@877 -- # return 0 00:12:45.762 16:28:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:45.762 16:28:22 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:45.762 16:28:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:46.021 /dev/nbd7 00:12:46.021 16:28:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:46.021 16:28:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:46.021 16:28:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:12:46.021 16:28:22 -- common/autotest_common.sh@857 -- # local i 00:12:46.021 16:28:22 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:46.021 16:28:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:46.021 16:28:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:12:46.021 16:28:22 -- common/autotest_common.sh@861 -- # break 00:12:46.021 16:28:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:46.021 16:28:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:46.021 16:28:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.021 1+0 records in 00:12:46.021 1+0 records out 00:12:46.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000831533 s, 4.9 MB/s 00:12:46.021 16:28:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.021 16:28:22 -- common/autotest_common.sh@874 -- # size=4096 00:12:46.021 16:28:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.021 16:28:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:46.021 16:28:22 -- common/autotest_common.sh@877 -- # return 0 00:12:46.021 16:28:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.021 16:28:22 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:46.021 16:28:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:46.280 /dev/nbd8 00:12:46.280 16:28:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:46.280 16:28:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:46.280 16:28:22 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:12:46.280 16:28:22 -- common/autotest_common.sh@857 -- # local i 00:12:46.280 16:28:22 -- common/autotest_common.sh@859 -- # (( i 
= 1 )) 00:12:46.280 16:28:22 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:46.280 16:28:22 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:12:46.280 16:28:22 -- common/autotest_common.sh@861 -- # break 00:12:46.280 16:28:22 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:46.280 16:28:22 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:46.280 16:28:22 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.280 1+0 records in 00:12:46.280 1+0 records out 00:12:46.280 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000818589 s, 5.0 MB/s 00:12:46.280 16:28:22 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.280 16:28:22 -- common/autotest_common.sh@874 -- # size=4096 00:12:46.280 16:28:22 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.280 16:28:22 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:46.280 16:28:22 -- common/autotest_common.sh@877 -- # return 0 00:12:46.280 16:28:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.280 16:28:22 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:46.280 16:28:22 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:46.541 /dev/nbd9 00:12:46.541 16:28:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:46.541 16:28:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:46.541 16:28:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:12:46.541 16:28:23 -- common/autotest_common.sh@857 -- # local i 00:12:46.541 16:28:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:12:46.541 16:28:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:12:46.541 16:28:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:12:46.541 16:28:23 -- common/autotest_common.sh@861 -- # break 00:12:46.541 16:28:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:12:46.541 16:28:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:12:46.541 16:28:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.541 1+0 records in 00:12:46.541 1+0 records out 00:12:46.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111641 s, 3.7 MB/s 00:12:46.541 16:28:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.541 16:28:23 -- common/autotest_common.sh@874 -- # size=4096 00:12:46.541 16:28:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.541 16:28:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:12:46.541 16:28:23 -- common/autotest_common.sh@877 -- # return 0 00:12:46.541 16:28:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.541 16:28:23 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:46.541 16:28:23 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:46.541 16:28:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.541 16:28:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:46.800 16:28:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd0", 00:12:46.800 "bdev_name": "Malloc0" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 
"nbd_device": "/dev/nbd1", 00:12:46.800 "bdev_name": "Malloc1p0" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd10", 00:12:46.800 "bdev_name": "Malloc1p1" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd11", 00:12:46.800 "bdev_name": "Malloc2p0" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd12", 00:12:46.800 "bdev_name": "Malloc2p1" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd13", 00:12:46.800 "bdev_name": "Malloc2p2" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd14", 00:12:46.800 "bdev_name": "Malloc2p3" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd15", 00:12:46.800 "bdev_name": "Malloc2p4" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd2", 00:12:46.800 "bdev_name": "Malloc2p5" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd3", 00:12:46.800 "bdev_name": "Malloc2p6" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd4", 00:12:46.800 "bdev_name": "Malloc2p7" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd5", 00:12:46.800 "bdev_name": "TestPT" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd6", 00:12:46.800 "bdev_name": "raid0" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd7", 00:12:46.800 "bdev_name": "concat0" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd8", 00:12:46.800 "bdev_name": "raid1" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd9", 00:12:46.800 "bdev_name": "AIO0" 00:12:46.800 } 00:12:46.800 ]' 00:12:46.800 16:28:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd0", 00:12:46.800 "bdev_name": "Malloc0" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd1", 00:12:46.800 "bdev_name": "Malloc1p0" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd10", 00:12:46.800 "bdev_name": "Malloc1p1" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd11", 00:12:46.800 "bdev_name": "Malloc2p0" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd12", 00:12:46.800 "bdev_name": "Malloc2p1" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd13", 00:12:46.800 "bdev_name": "Malloc2p2" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd14", 00:12:46.800 "bdev_name": "Malloc2p3" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd15", 00:12:46.800 "bdev_name": "Malloc2p4" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd2", 00:12:46.800 "bdev_name": "Malloc2p5" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd3", 00:12:46.800 "bdev_name": "Malloc2p6" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd4", 00:12:46.800 "bdev_name": "Malloc2p7" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd5", 00:12:46.800 "bdev_name": "TestPT" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd6", 00:12:46.800 "bdev_name": "raid0" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd7", 00:12:46.800 "bdev_name": "concat0" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd8", 00:12:46.800 "bdev_name": "raid1" 00:12:46.800 }, 00:12:46.800 { 00:12:46.800 "nbd_device": "/dev/nbd9", 00:12:46.800 "bdev_name": "AIO0" 00:12:46.800 } 00:12:46.800 ]' 00:12:46.800 16:28:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:46.800 16:28:23 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name='/dev/nbd0
00:12:46.800 /dev/nbd1
00:12:46.800 /dev/nbd10
00:12:46.800 /dev/nbd11
00:12:46.800 /dev/nbd12
00:12:46.800 /dev/nbd13
00:12:46.800 /dev/nbd14
00:12:46.800 /dev/nbd15
00:12:46.800 /dev/nbd2
00:12:46.800 /dev/nbd3
00:12:46.800 /dev/nbd4
00:12:46.800 /dev/nbd5
00:12:46.800 /dev/nbd6
00:12:46.800 /dev/nbd7
00:12:46.800 /dev/nbd8
00:12:46.800 /dev/nbd9'
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:12:46.800 /dev/nbd1
00:12:46.800 /dev/nbd10
00:12:46.800 /dev/nbd11
00:12:46.800 /dev/nbd12
00:12:46.800 /dev/nbd13
00:12:46.800 /dev/nbd14
00:12:46.800 /dev/nbd15
00:12:46.800 /dev/nbd2
00:12:46.800 /dev/nbd3
00:12:46.800 /dev/nbd4
00:12:46.800 /dev/nbd5
00:12:46.800 /dev/nbd6
00:12:46.800 /dev/nbd7
00:12:46.800 /dev/nbd8
00:12:46.800 /dev/nbd9'
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@65 -- # count=16
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@66 -- # echo 16
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@95 -- # count=16
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']'
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@70 -- # nbd_list=($1)
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@71 -- # local operation=write
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:12:46.800 256+0 records in
00:12:46.800 256+0 records out
00:12:46.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00867098 s, 121 MB/s
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:46.800 16:28:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:12:47.086 256+0 records in
00:12:47.086 256+0 records out
00:12:47.086 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120516 s, 8.7 MB/s
00:12:47.086 16:28:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:47.086 16:28:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:12:47.086 256+0 records in
00:12:47.086 256+0 records out
00:12:47.086 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125826 s, 8.3 MB/s
00:12:47.086 16:28:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:47.086 16:28:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:12:47.347 256+0 records in
00:12:47.347 256+0 records out
00:12:47.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143215 s, 7.3 MB/s
00:12:47.347 16:28:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:47.347 16:28:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:12:47.347 256+0 records in
00:12:47.347 256+0 records out
00:12:47.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13014 s, 8.1 MB/s
00:12:47.347 16:28:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:47.347 16:28:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:12:47.606 256+0 records in
00:12:47.606 256+0 records out
00:12:47.606 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12905 s, 8.1 MB/s
00:12:47.606 16:28:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:47.606 16:28:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:12:47.606 256+0 records in
00:12:47.606 256+0 records out
00:12:47.606 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143492 s, 7.3 MB/s
00:12:47.606 16:28:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:47.606 16:28:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct
00:12:47.865 256+0 records in
00:12:47.865 256+0 records out
00:12:47.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148724 s, 7.1 MB/s
00:12:47.865 16:28:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:47.865 16:28:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct
00:12:47.865 256+0 records in
00:12:47.865 256+0 records out
00:12:47.865 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145742 s, 7.2 MB/s
00:12:47.865 16:28:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:47.865 16:28:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct
00:12:48.124 256+0 records in
00:12:48.124 256+0 records out
00:12:48.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136819 s, 7.7 MB/s
00:12:48.124 16:28:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:48.124 16:28:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct
00:12:48.124 256+0 records in
00:12:48.124 256+0 records out
00:12:48.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138612 s, 7.6 MB/s
00:12:48.124 16:28:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:48.124 16:28:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct
00:12:48.382 256+0 records in
00:12:48.382 256+0 records out
00:12:48.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137191 s, 7.6 MB/s
00:12:48.382 16:28:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:48.382 16:28:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct
00:12:48.641 256+0 records in
00:12:48.641 256+0 records out
00:12:48.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138408 s, 7.6 MB/s
00:12:48.641 16:28:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:48.641 16:28:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct
00:12:48.641 256+0 records in
00:12:48.641 256+0 records out
00:12:48.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138287 s, 7.6 MB/s
00:12:48.641 16:28:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:48.641 16:28:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct
00:12:48.899 256+0 records in
00:12:48.899 256+0 records out
00:12:48.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132436 s, 7.9 MB/s
00:12:48.899 16:28:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:48.899 16:28:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct
00:12:48.899 256+0 records in
00:12:48.899 256+0 records out
00:12:48.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.151391 s, 6.9 MB/s
00:12:48.899 16:28:25 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:12:48.900 16:28:25 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct
00:12:49.159 256+0 records in
00:12:49.159 256+0 records out
00:12:49.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.219169 s, 4.8 MB/s
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@70 -- # nbd_list=($1)
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.159 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8
00:12:49.418 16:28:25 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:12:49.418 16:28:25 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9
00:12:49.418 16:28:25 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:12:49.418 16:28:25 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:12:49.418 16:28:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:49.418 16:28:25 -- bdev/nbd_common.sh@50 -- # nbd_list=($2)
00:12:49.418 16:28:25 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:49.418 16:28:25 -- bdev/nbd_common.sh@51 -- # local i
00:12:49.418 16:28:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:49.418 16:28:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:49.677 16:28:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:49.677 16:28:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:49.677 16:28:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:49.677 16:28:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:49.677 16:28:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:49.677 16:28:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:49.677 16:28:26 -- bdev/nbd_common.sh@41 -- # break
00:12:49.677 16:28:26 -- bdev/nbd_common.sh@45 -- # return 0
00:12:49.677 16:28:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:49.677 16:28:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
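The pass above is the heart of nbd_dd_data_verify: a single urandom-filled scratch file is written to all sixteen NBD devices with oflag=direct (so the 4 KiB writes hit the block layer rather than the page cache), and the verify pass then byte-compares the first 1 MiB of every device against that same file with cmp. A minimal standalone sketch of the pattern, with hypothetical variable names in place of the real helper in test/bdev/nbd_common.sh:

    # Write the same random 1 MiB to every NBD device, then verify byte-for-byte.
    tmp_file=$(mktemp)                                    # the suite uses test/bdev/nbdrandtest instead
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256   # 256 x 4 KiB = 1 MiB of random data
    for dev in /dev/nbd{0..15}; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # O_DIRECT write to the device
    done
    for dev in /dev/nbd{0..15}; do
        cmp -b -n 1M "$tmp_file" "$dev"                   # exits non-zero on the first differing byte
    done
    rm "$tmp_file"

Because cmp fails on the first mismatched byte, any corruption introduced anywhere in the bdev stack aborts the test immediately rather than being averaged away.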
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@41 -- # break
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@45 -- # return 0
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:49.935 16:28:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:12:50.194 16:28:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:12:50.194 16:28:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:12:50.194 16:28:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:12:50.194 16:28:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:50.194 16:28:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:50.194 16:28:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:12:50.194 16:28:26 -- bdev/nbd_common.sh@41 -- # break
00:12:50.194 16:28:26 -- bdev/nbd_common.sh@45 -- # return 0
00:12:50.194 16:28:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:50.194 16:28:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@41 -- # break
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@45 -- # return 0
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:50.453 16:28:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:12:50.711 16:28:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:12:50.711 16:28:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:12:50.711 16:28:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:12:50.711 16:28:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:50.711 16:28:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:50.711 16:28:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:12:50.711 16:28:27 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:12:50.970 16:28:27 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:12:50.970 16:28:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:50.970 16:28:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:12:50.970 16:28:27 -- bdev/nbd_common.sh@41 -- # break
00:12:50.970 16:28:27 -- bdev/nbd_common.sh@45 -- # return 0
00:12:50.970 16:28:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:50.970 16:28:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@41 -- # break
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@45 -- # return 0
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:51.229 16:28:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@41 -- # break
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@45 -- # return 0
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:51.488 16:28:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@41 -- # break
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@45 -- # return 0
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:51.747 16:28:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:12:52.005 16:28:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:12:52.005 16:28:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:12:52.005 16:28:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:12:52.005 16:28:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:52.005 16:28:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:52.005 16:28:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:12:52.005 16:28:28 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:12:52.264 16:28:28 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:12:52.264 16:28:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
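nbd_stop_disk only issues the RPC; the kernel can take a moment to tear the NBD connection down, which is why every stop above is followed by the waitfornbd_exit polling loop visible in the (( i <= 20 )) / sleep 0.1 lines. A rough standalone equivalent, wrapping the real rpc.py call in a hypothetical helper name:

    # Poll /proc/partitions until the nbd device disappears (up to 20 x 0.1 s, ~2 s total).
    wait_nbd_gone() {
        local name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || return 0  # no longer listed: teardown done
            sleep 0.1                                        # still listed: retry shortly
        done
        return 1                                             # device never went away
    }
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
    wait_nbd_gone nbd12

The -w flag on grep matters here: without whole-word matching, a wait for nbd1 would keep matching nbd10 through nbd15 in /proc/partitions and spuriously time out.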
00:12:52.264 16:28:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:12:52.264 16:28:28 -- bdev/nbd_common.sh@41 -- # break
00:12:52.264 16:28:28 -- bdev/nbd_common.sh@45 -- # return 0
00:12:52.264 16:28:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:52.264 16:28:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:12:52.522 16:28:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:12:52.522 16:28:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:12:52.522 16:28:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:12:52.522 16:28:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:52.522 16:28:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:52.522 16:28:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:12:52.522 16:28:29 -- bdev/nbd_common.sh@41 -- # break
00:12:52.522 16:28:29 -- bdev/nbd_common.sh@45 -- # return 0
00:12:52.522 16:28:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:52.522 16:28:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@41 -- # break
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@45 -- # return 0
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:52.781 16:28:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@41 -- # break
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@45 -- # return 0
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:53.040 16:28:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:12:53.299 16:28:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:12:53.299 16:28:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:12:53.299 16:28:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:12:53.299 16:28:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:53.299 16:28:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:53.299 16:28:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:12:53.299 16:28:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:12:53.558 16:28:30 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:12:53.558 16:28:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:53.558 16:28:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:12:53.558 16:28:30 -- bdev/nbd_common.sh@41 -- # break
00:12:53.558 16:28:30 -- bdev/nbd_common.sh@45 -- # return 0
00:12:53.558 16:28:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:53.558 16:28:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@41 -- # break
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@45 -- # return 0
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:53.816 16:28:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@41 -- # break
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@45 -- # return 0
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:54.075 16:28:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:12:54.333 16:28:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:12:54.333 16:28:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:12:54.333 16:28:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:12:54.333 16:28:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:54.333 16:28:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:54.333 16:28:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:12:54.333 16:28:31 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:12:54.591 16:28:31 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:12:54.591 16:28:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:54.591 16:28:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:12:54.591 16:28:31 -- bdev/nbd_common.sh@41 -- # break
00:12:54.591 16:28:31 -- bdev/nbd_common.sh@45 -- # return 0
00:12:54.591 16:28:31 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:54.591 16:28:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:54.591 16:28:31 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@65 -- # echo ''
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@65 -- # true
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@65 -- # count=0
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@66 -- # echo 0
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@104 -- # count=0
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@109 -- # return 0
00:12:54.848 16:28:31 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@132 -- # nbd_list=($2)
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@132 -- # local nbd_list
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:12:54.848 16:28:31 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:12:55.106 malloc_lvol_verify
00:12:55.106 16:28:31 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:12:55.365 131c7249-ce72-4d50-bc49-94fd487f7bf4
00:12:55.365 16:28:31 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:12:55.365 6f998869-39ab-4c5e-8291-d3afb58b3e44
00:12:55.365 16:28:32 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:12:55.623 /dev/nbd0
00:12:55.623 16:28:32 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:12:55.623 mke2fs 1.45.5 (07-Jan-2020)
00:12:55.623
00:12:55.623 Filesystem too small for a journal
00:12:55.623 Creating filesystem with 1024 4k blocks and 1024 inodes
00:12:55.623
00:12:55.623 Allocating group tables: 0/1 done
00:12:55.623 Writing inode tables: 0/1 done
00:12:55.623 Writing superblocks and filesystem accounting information: 0/1 done
00:12:55.623
00:12:55.623 16:28:32 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:12:55.623 16:28:32 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:12:55.623 16:28:32 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:55.623 16:28:32 -- bdev/nbd_common.sh@50 -- # nbd_list=($2)
00:12:55.623 16:28:32 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:55.623 16:28:32 -- bdev/nbd_common.sh@51 -- # local i
00:12:55.623 16:28:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:55.623 16:28:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:55.882 16:28:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:55.882 16:28:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:55.882 16:28:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:55.882 16:28:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:55.882 16:28:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:55.882 16:28:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:55.882 16:28:32 -- bdev/nbd_common.sh@41 -- # break
00:12:55.882 16:28:32 -- bdev/nbd_common.sh@45 -- # return 0
00:12:55.882 16:28:32 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:12:55.882 16:28:32 -- bdev/nbd_common.sh@147 -- # return 0
00:12:55.882 16:28:32 -- bdev/blockdev.sh@324 -- # killprocess 111220
00:12:55.882 16:28:32 -- common/autotest_common.sh@926 -- # '[' -z 111220 ']'
00:12:55.882 16:28:32 -- common/autotest_common.sh@930 -- # kill -0 111220
00:12:55.882 16:28:32 -- common/autotest_common.sh@931 -- # uname
00:12:55.882 16:28:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:12:55.882 16:28:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 111220
00:12:55.882 16:28:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:12:55.882 16:28:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:12:55.882 16:28:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 111220' killing process with pid 111220
00:12:55.882 16:28:32 -- common/autotest_common.sh@945 -- # kill 111220
00:12:55.882 16:28:32 -- common/autotest_common.sh@950 -- # wait 111220
00:12:57.784 ************************************
00:12:57.784 END TEST bdev_nbd
00:12:57.784 ************************************
00:12:57.784 16:28:34 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:12:57.784
00:12:57.784 real 0m26.613s
00:12:57.784 user 0m35.005s
00:12:57.784 sys 0m9.176s
00:12:57.784 16:28:34 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:57.784 16:28:34 -- common/autotest_common.sh@10 -- # set +x
00:12:57.784 16:28:34 -- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:12:57.784 16:28:34 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite ''
00:12:57.784 16:28:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:12:57.784 16:28:34 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:12:57.784 16:28:34 -- common/autotest_common.sh@10 -- # set +x
00:12:57.784 ************************************
00:12:57.784 START TEST bdev_fio
00:12:57.784 ************************************
00:12:57.784 16:28:34 -- common/autotest_common.sh@1104 -- # fio_test_suite ''
00:12:57.784 16:28:34 -- bdev/blockdev.sh@329 -- # local env_context
00:12:57.784 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:12:57.784 16:28:34 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:12:57.784 16:28:34 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:12:57.784 16:28:34 -- bdev/blockdev.sh@337 -- # echo ''
00:12:57.784 16:28:34 -- bdev/blockdev.sh@337 -- # sed s/--env-context=//
00:12:57.784 16:28:34 -- bdev/blockdev.sh@337 -- # env_context=
00:12:57.784 16:28:34 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:12:57.784 16:28:34 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:57.784 16:28:34 -- common/autotest_common.sh@1260 -- # local workload=verify
00:12:57.784 16:28:34 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO
00:12:57.784 16:28:34 -- common/autotest_common.sh@1262 -- # local env_context=
00:12:57.784 16:28:34 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio
00:12:57.784 16:28:34 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:12:57.784 16:28:34 -- common/autotest_common.sh@1270 -- # '[' -z verify ']'
00:12:57.784 16:28:34 -- common/autotest_common.sh@1274 -- # '[' -n '' ']'
00:12:57.784 16:28:34 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:57.784 16:28:34 -- common/autotest_common.sh@1280 -- # cat
00:12:57.784 16:28:34 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']'
00:12:57.784 16:28:34 -- common/autotest_common.sh@1293 -- # cat
00:12:57.784 16:28:34 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']'
00:12:57.784 16:28:34 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version
00:12:57.784 16:28:34 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:12:57.784 16:28:34 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=TestPT
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=raid0
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=concat0
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=raid1
00:12:57.784 16:28:34 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}"
00:12:57.784 16:28:34 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@341 -- # echo filename=AIO0
00:12:57.784 16:28:34 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:12:57.784 16:28:34 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:57.784 16:28:34 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']'
00:12:57.784 16:28:34 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:12:57.784 16:28:34 -- common/autotest_common.sh@10 -- # set +x
00:12:57.784 ************************************
00:12:57.784 START TEST bdev_fio_rw_verify
00:12:57.785 ************************************
00:12:57.785 16:28:34 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:57.785 16:28:34 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:57.785 16:28:34 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio
00:12:57.785 16:28:34 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan)
00:12:57.785 16:28:34 -- common/autotest_common.sh@1318 -- # local sanitizers
00:12:57.785 16:28:34 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:57.785 16:28:34 -- common/autotest_common.sh@1320 -- # shift
00:12:57.785 16:28:34 -- common/autotest_common.sh@1322 -- # local asan_lib=
00:12:57.785 16:28:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:12:57.785 16:28:34 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:57.785 16:28:34 -- common/autotest_common.sh@1324 -- # grep libasan
00:12:57.785 16:28:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:12:57.785 16:28:34 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5
00:12:57.785 16:28:34 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]]
00:12:57.785 16:28:34 -- common/autotest_common.sh@1326 -- # break
00:12:57.785 16:28:34 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:12:57.785 16:28:34 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:58.043 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:58.043 fio-3.35
00:12:58.043
00:12:58.043 Starting 16 threads
00:13:10.258
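The sixteen job lines fio echoes back correspond one-to-one to the config file fio_config_gen just assembled: a shared [global] section plus one [job_<bdev>] stanza per bdev, where filename= names an SPDK bdev (resolved through --spdk_json_conf) rather than a /dev node, and the queue depth, block size and runtime arrive on the fio_bdev command line instead. Reconstructed from the echo calls traced above, the generated bdev.fio looks roughly like this; the [global] keys other than serialize_overlap=1 are assumptions here:

    [global]
    ; written by fio_config_gen for workload=verify; serialize_overlap=1 is the
    ; one key the trace shows being appended (because fio --version reported fio-3.x)
    rw=randwrite
    serialize_overlap=1
    ; assumed: the spdk_bdev ioengine requires fio's threaded mode
    thread=1

    [job_Malloc0]
    ; filename is an SPDK bdev name resolved via --spdk_json_conf, not a block-device path
    filename=Malloc0

    [job_Malloc1p0]
    filename=Malloc1p0
    ; ...one stanza per bdev, sixteen in total, ending with [job_AIO0]

Running it then only needs the plugin preloaded and the JSON bdev config supplied, which is exactly the LD_PRELOAD + /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=... invocation traced above (the libasan.so.5 entry in LD_PRELOAD is there because this is an ASan build).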
00:13:10.258 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=112498: Thu Jul 11 16:28:45 2024
00:13:10.258 read: IOPS=72.0k, BW=281MiB/s (295MB/s)(2814MiB/10001msec)
00:13:10.258 slat (usec): min=2, max=38551, avg=41.14, stdev=440.57
00:13:10.258 clat (usec): min=9, max=48246, avg=330.35, stdev=1297.63
00:13:10.258 lat (usec): min=22, max=48269, avg=371.49, stdev=1369.96
00:13:10.258 clat percentiles (usec):
00:13:10.258 | 50.000th=[ 198], 99.000th=[ 1123], 99.900th=[16319], 99.990th=[24249],
00:13:10.258 | 99.999th=[47973]
00:13:10.258 write: IOPS=117k, BW=457MiB/s (479MB/s)(4509MiB/9875msec); 0 zone resets
00:13:10.258 slat (usec): min=4, max=44049, avg=65.95, stdev=567.76
00:13:10.258 clat (usec): min=9, max=44411, avg=403.42, stdev=1424.03
00:13:10.258 lat (usec): min=35, max=44466, avg=469.37, stdev=1532.94
00:13:10.258 clat percentiles (usec):
00:13:10.258 | 50.000th=[ 245], 99.000th=[ 4228], 99.900th=[16450], 99.990th=[24511],
00:13:10.258 | 99.999th=[36963]
00:13:10.258 bw ( KiB/s): min=280224, max=735160, per=98.97%, avg=462689.89, stdev=8975.84, samples=304
00:13:10.258 iops : min=70056, max=183790, avg=115672.37, stdev=2243.96, samples=304
00:13:10.258 lat (usec) : 10=0.01%, 20=0.01%, 50=0.64%, 100=7.96%, 250=50.16%
00:13:10.258 lat (usec) : 500=36.08%, 750=2.63%, 1000=0.94%
00:13:10.258 lat (msec) : 2=0.53%, 4=0.11%, 10=0.22%, 20=0.66%, 50=0.04%
00:13:10.258 cpu : usr=57.97%, sys=2.18%, ctx=224036, majf=0, minf=80091
00:13:10.258 IO depths : 1=11.5%, 2=23.9%, 4=51.6%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:10.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:10.258 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:10.258 issued rwts: total=720341,1154206,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:10.258 latency : target=0, window=0, percentile=100.00%, depth=8
00:13:10.258
00:13:10.258 Run status group 0 (all jobs):
00:13:10.258 READ: bw=281MiB/s (295MB/s), 281MiB/s-281MiB/s (295MB/s-295MB/s), io=2814MiB (2951MB), run=10001-10001msec
00:13:10.258 WRITE: bw=457MiB/s (479MB/s), 457MiB/s-457MiB/s (479MB/s-479MB/s), io=4509MiB (4728MB), run=9875-9875msec
00:13:11.636 -----------------------------------------------------
00:13:11.636 Suppressions used:
00:13:11.636 count bytes template
00:13:11.636 16 140 /usr/src/fio/parse.c
00:13:11.636 11428 1097088 /usr/src/fio/iolog.c
00:13:11.636 2 596 libcrypto.so
00:13:11.636 -----------------------------------------------------
00:13:11.636
00:13:11.636
00:13:11.636 real 0m13.807s
00:13:11.636 user 1m37.637s
00:13:11.636 sys 0m4.429s
00:13:11.636 16:28:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:11.636 ************************************
00:13:11.636 END TEST bdev_fio_rw_verify
00:13:11.636 ************************************
00:13:11.636 16:28:48 -- common/autotest_common.sh@10 -- # set +x
00:13:11.636 16:28:48 -- bdev/blockdev.sh@348 -- # rm -f
00:13:11.636 16:28:48 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:13:11.636 16:28:48 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:13:11.636 16:28:48 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:13:11.636 16:28:48 -- common/autotest_common.sh@1260 -- # local workload=trim
00:13:11.636 16:28:48 -- common/autotest_common.sh@1261 -- # local bdev_type=
00:13:11.636 16:28:48 -- common/autotest_common.sh@1262 -- # local env_context=
00:13:11.636 16:28:48 --
common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:13:11.636 16:28:48 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:11.636 16:28:48 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:13:11.636 16:28:48 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:13:11.636 16:28:48 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:11.636 16:28:48 -- common/autotest_common.sh@1280 -- # cat 00:13:11.636 16:28:48 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:13:11.636 16:28:48 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:13:11.636 16:28:48 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:13:11.636 16:28:48 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:11.637 16:28:48 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3b5fd810-8020-4360-87d6-26f55343a79b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3b5fd810-8020-4360-87d6-26f55343a79b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "7dbc8af3-0430-5e80-9afd-ba058e41d14b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "7dbc8af3-0430-5e80-9afd-ba058e41d14b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2d2c27df-2aa8-53bf-af4b-cfbc2a09d477"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2d2c27df-2aa8-53bf-af4b-cfbc2a09d477",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "742fcb8e-21d8-5342-a0f0-fb9db9d8bbff"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "742fcb8e-21d8-5342-a0f0-fb9db9d8bbff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "e26058ad-004d-5e5e-8a9e-e4b9975a79f0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e26058ad-004d-5e5e-8a9e-e4b9975a79f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "6e74d207-aef4-5ee4-9a87-397a72c88c58"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6e74d207-aef4-5ee4-9a87-397a72c88c58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "93f00d40-9d63-5451-9c17-6662291cdaf8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "93f00d40-9d63-5451-9c17-6662291cdaf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cf3147ae-fc78-5542-96f3-949c8ae504fe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cf3147ae-fc78-5542-96f3-949c8ae504fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": 
"Malloc2p5",' ' "aliases": [' ' "e8758ca2-1ea1-5e97-8297-cdb49b1a9859"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e8758ca2-1ea1-5e97-8297-cdb49b1a9859",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "2eaebf22-a080-5a78-8793-6cc3aa51b58e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2eaebf22-a080-5a78-8793-6cc3aa51b58e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "e8736bcd-cb04-52ae-9c79-a6317812a96d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e8736bcd-cb04-52ae-9c79-a6317812a96d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "7d2a97ce-79fe-51d1-9296-10829d770d23"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "7d2a97ce-79fe-51d1-9296-10829d770d23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d40ffbd1-ee83-40f4-9b6d-db253f5deec1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d40ffbd1-ee83-40f4-9b6d-db253f5deec1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d40ffbd1-ee83-40f4-9b6d-db253f5deec1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "dd8c9277-a081-4ea4-a309-eca950975e48",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "04ea45e0-5274-4ada-a0e3-1a39ddae58ec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "9a63579c-7158-4366-a70d-e13ec90f74db"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9a63579c-7158-4366-a70d-e13ec90f74db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9a63579c-7158-4366-a70d-e13ec90f74db",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "6e8b82bd-3b33-4a58-b4e6-7b9023d47713",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "54c000da-a792-4655-b49b-c9a16e5a23d1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b89f1ad2-aa19-407e-8270-f89ca89f00ca"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b89f1ad2-aa19-407e-8270-f89ca89f00ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b89f1ad2-aa19-407e-8270-f89ca89f00ca",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": 
"raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "e88acdfe-5e9a-4c33-87d2-d3dc8319cfbd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "4e7e1fff-8fb4-4ffd-9b68-3f2f24154dce",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "e8c732c5-2bc0-4d88-ae25-d4b7de9ae5ad"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "e8c732c5-2bc0-4d88-ae25-d4b7de9ae5ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:11.637 16:28:48 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:13:11.637 Malloc1p0 00:13:11.637 Malloc1p1 00:13:11.637 Malloc2p0 00:13:11.637 Malloc2p1 00:13:11.637 Malloc2p2 00:13:11.637 Malloc2p3 00:13:11.637 Malloc2p4 00:13:11.637 Malloc2p5 00:13:11.637 Malloc2p6 00:13:11.637 Malloc2p7 00:13:11.637 TestPT 00:13:11.637 raid0 00:13:11.637 concat0 ]] 00:13:11.637 16:28:48 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:11.638 16:28:48 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "3b5fd810-8020-4360-87d6-26f55343a79b"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3b5fd810-8020-4360-87d6-26f55343a79b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "7dbc8af3-0430-5e80-9afd-ba058e41d14b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "7dbc8af3-0430-5e80-9afd-ba058e41d14b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2d2c27df-2aa8-53bf-af4b-cfbc2a09d477"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' 
"num_blocks": 32768,' ' "uuid": "2d2c27df-2aa8-53bf-af4b-cfbc2a09d477",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "742fcb8e-21d8-5342-a0f0-fb9db9d8bbff"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "742fcb8e-21d8-5342-a0f0-fb9db9d8bbff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "e26058ad-004d-5e5e-8a9e-e4b9975a79f0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e26058ad-004d-5e5e-8a9e-e4b9975a79f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "6e74d207-aef4-5ee4-9a87-397a72c88c58"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6e74d207-aef4-5ee4-9a87-397a72c88c58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "93f00d40-9d63-5451-9c17-6662291cdaf8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "93f00d40-9d63-5451-9c17-6662291cdaf8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": 
false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cf3147ae-fc78-5542-96f3-949c8ae504fe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cf3147ae-fc78-5542-96f3-949c8ae504fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "e8758ca2-1ea1-5e97-8297-cdb49b1a9859"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e8758ca2-1ea1-5e97-8297-cdb49b1a9859",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "2eaebf22-a080-5a78-8793-6cc3aa51b58e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2eaebf22-a080-5a78-8793-6cc3aa51b58e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "e8736bcd-cb04-52ae-9c79-a6317812a96d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e8736bcd-cb04-52ae-9c79-a6317812a96d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "7d2a97ce-79fe-51d1-9296-10829d770d23"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "7d2a97ce-79fe-51d1-9296-10829d770d23",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "d40ffbd1-ee83-40f4-9b6d-db253f5deec1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "d40ffbd1-ee83-40f4-9b6d-db253f5deec1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "d40ffbd1-ee83-40f4-9b6d-db253f5deec1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "dd8c9277-a081-4ea4-a309-eca950975e48",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "04ea45e0-5274-4ada-a0e3-1a39ddae58ec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "9a63579c-7158-4366-a70d-e13ec90f74db"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "9a63579c-7158-4366-a70d-e13ec90f74db",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "9a63579c-7158-4366-a70d-e13ec90f74db",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "6e8b82bd-3b33-4a58-b4e6-7b9023d47713",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "54c000da-a792-4655-b49b-c9a16e5a23d1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b89f1ad2-aa19-407e-8270-f89ca89f00ca"' ' ],' ' "product_name": 
"Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b89f1ad2-aa19-407e-8270-f89ca89f00ca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b89f1ad2-aa19-407e-8270-f89ca89f00ca",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "e88acdfe-5e9a-4c33-87d2-d3dc8319cfbd",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "4e7e1fff-8fb4-4ffd-9b68-3f2f24154dce",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "e8c732c5-2bc0-4d88-ae25-d4b7de9ae5ad"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "e8c732c5-2bc0-4d88-ae25-d4b7de9ae5ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:13:11.638 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.638 16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:13:11.638 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:13:11.638 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.638 16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:13:11.638 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:13:11.638 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.638 16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:13:11.638 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:13:11.638 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.639 16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:13:11.639 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:13:11.639 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.639 
16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:13:11.639 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:13:11.639 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.639 16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:13:11.639 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:13:11.639 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.639 16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:13:11.639 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:13:11.639 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.639 16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:13:11.639 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:13:11.639 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.639 16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:13:11.639 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:13:11.639 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.639 16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:13:11.639 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:13:11.639 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.639 16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:13:11.639 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:13:11.639 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.639 16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:13:11.639 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:13:11.639 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.639 16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:13:11.639 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:13:11.639 16:28:48 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:11.639 16:28:48 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:13:11.639 16:28:48 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:13:11.639 16:28:48 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.639 16:28:48 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:13:11.639 16:28:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:11.639 16:28:48 -- common/autotest_common.sh@10 -- # set +x 00:13:11.639 ************************************ 00:13:11.639 START TEST bdev_fio_trim 00:13:11.639 ************************************ 00:13:11.639 16:28:48 -- 
common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.639 16:28:48 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.639 16:28:48 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:13:11.639 16:28:48 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:13:11.639 16:28:48 -- common/autotest_common.sh@1318 -- # local sanitizers 00:13:11.639 16:28:48 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:11.639 16:28:48 -- common/autotest_common.sh@1320 -- # shift 00:13:11.639 16:28:48 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:13:11.639 16:28:48 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:13:11.639 16:28:48 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:11.639 16:28:48 -- common/autotest_common.sh@1324 -- # grep libasan 00:13:11.639 16:28:48 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:13:11.639 16:28:48 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:13:11.639 16:28:48 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:13:11.639 16:28:48 -- common/autotest_common.sh@1326 -- # break 00:13:11.639 16:28:48 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:11.639 16:28:48 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:11.898 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.898 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.898 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.898 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.898 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.898 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.898 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.898 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.898 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 
00:13:11.898 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.898 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.898 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.898 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.898 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:11.898 fio-3.35 00:13:11.898 Starting 14 threads 00:13:24.102 00:13:24.102 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=112744: Thu Jul 11 16:28:59 2024 00:13:24.102 write: IOPS=129k, BW=503MiB/s (527MB/s)(5029MiB/10003msec); 0 zone resets 00:13:24.102 slat (usec): min=2, max=32054, avg=38.53, stdev=407.38 00:13:24.102 clat (usec): min=23, max=37750, avg=284.03, stdev=1187.40 00:13:24.102 lat (usec): min=30, max=37800, avg=322.56, stdev=1254.59 00:13:24.102 clat percentiles (usec): 00:13:24.102 | 50.000th=[ 188], 99.000th=[ 478], 99.900th=[16319], 99.990th=[20317], 00:13:24.102 | 99.999th=[32375] 00:13:24.102 bw ( KiB/s): min=376960, max=647472, per=100.00%, avg=515440.79, stdev=6782.62, samples=266 00:13:24.102 iops : min=94240, max=161868, avg=128860.16, stdev=1695.65, samples=266 00:13:24.102 trim: IOPS=129k, BW=503MiB/s (527MB/s)(5029MiB/10003msec); 0 zone resets 00:13:24.103 slat (usec): min=4, max=28037, avg=25.48, stdev=330.41 00:13:24.103 clat (usec): min=4, max=37801, avg=296.90, stdev=1144.05 00:13:24.103 lat (usec): min=10, max=37827, avg=322.39, stdev=1190.54 00:13:24.103 clat percentiles (usec): 00:13:24.103 | 50.000th=[ 210], 99.000th=[ 400], 99.900th=[16319], 99.990th=[20317], 00:13:24.103 | 99.999th=[28181] 00:13:24.103 bw ( KiB/s): min=376968, max=647472, per=100.00%, avg=515440.79, stdev=6782.55, samples=266 00:13:24.103 iops : min=94242, max=161868, avg=128860.16, stdev=1695.63, samples=266 00:13:24.103 lat (usec) : 10=0.09%, 20=0.26%, 50=1.24%, 100=6.49%, 250=63.37% 00:13:24.103 lat (usec) : 500=27.81%, 750=0.13%, 1000=0.01% 00:13:24.103 lat (msec) : 2=0.02%, 4=0.01%, 10=0.04%, 20=0.51%, 50=0.01% 00:13:24.103 cpu : usr=69.20%, sys=0.47%, ctx=173889, majf=0, minf=708 00:13:24.103 IO depths : 1=12.3%, 2=24.6%, 4=50.0%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.103 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.103 issued rwts: total=0,1287444,1287448,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.103 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:24.103 00:13:24.103 Run status group 0 (all jobs): 00:13:24.103 WRITE: bw=503MiB/s (527MB/s), 503MiB/s-503MiB/s (527MB/s-527MB/s), io=5029MiB (5273MB), run=10003-10003msec 00:13:24.103 TRIM: bw=503MiB/s (527MB/s), 503MiB/s-503MiB/s (527MB/s-527MB/s), io=5029MiB (5273MB), run=10003-10003msec 00:13:25.039 ----------------------------------------------------- 00:13:25.039 Suppressions used: 00:13:25.039 count bytes template 00:13:25.039 14 129 /usr/src/fio/parse.c 00:13:25.039 2 596 libcrypto.so 00:13:25.039 ----------------------------------------------------- 00:13:25.039 00:13:25.039 00:13:25.039 real 0m13.390s 00:13:25.039 user 1m41.685s 00:13:25.039 sys 0m1.476s 00:13:25.039 16:29:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 
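The trim run above is assembled entirely by the traced shell: jq filters the bdev list down to devices that advertise unmap support (raid1 and AIO0 report "unmap": false, so they get no job section), one [job_<bdev>] stanza is emitted per surviving device, and fio is launched with the spdk_bdev ioengine while the ASan runtime located via ldd is preloaded so the sanitizer-instrumented plugin can resolve its symbols inside the uninstrumented fio binary. A condensed sketch of that pattern, reconstructed for readability rather than quoted from blockdev.sh (the fio_config and plugin variables stand in for the paths shown in the trace):

    # Reconstruction of the traced pattern; variable values are illustrative.
    fio_config=bdev.fio                          # stands in for test/bdev/bdev.fio
    plugin=./build/fio/spdk_bdev                 # the fio ioengine plugin traced above
    # One job section per bdev that supports unmap, since trim requires it.
    for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"    >> "$fio_config"
        echo "filename=$b" >> "$fio_config"
    done
    # Locate the ASan runtime the plugin links against and preload it together
    # with the plugin itself, exactly as the LD_PRELOAD line in the trace does.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$fio_config"

In the results above, the matching WRITE and TRIM figures (503 MiB/s each, with issued counts of 1287444 writes and 1287448 trims) are expected: rw=trimwrite pairs each trim with a write over the same range.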
00:13:25.039 16:29:01 -- common/autotest_common.sh@10 -- # set +x 00:13:25.039 ************************************ 00:13:25.039 END TEST bdev_fio_trim 00:13:25.039 ************************************ 00:13:25.039 16:29:01 -- bdev/blockdev.sh@366 -- # rm -f 00:13:25.039 16:29:01 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:25.039 /home/vagrant/spdk_repo/spdk 00:13:25.039 16:29:01 -- bdev/blockdev.sh@368 -- # popd 00:13:25.039 16:29:01 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:13:25.039 00:13:25.039 real 0m27.488s 00:13:25.039 user 3m19.512s 00:13:25.039 sys 0m5.997s 00:13:25.039 ************************************ 00:13:25.039 END TEST bdev_fio 00:13:25.039 ************************************ 00:13:25.039 16:29:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:25.039 16:29:01 -- common/autotest_common.sh@10 -- # set +x 00:13:25.298 16:29:01 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:25.298 16:29:01 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:25.298 16:29:01 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:25.298 16:29:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:25.298 16:29:01 -- common/autotest_common.sh@10 -- # set +x 00:13:25.298 ************************************ 00:13:25.298 START TEST bdev_verify 00:13:25.298 ************************************ 00:13:25.298 16:29:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:25.298 [2024-07-11 16:29:01.929949] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:13:25.298 [2024-07-11 16:29:01.930671] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112958 ] 00:13:25.298 [2024-07-11 16:29:02.098608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:25.557 [2024-07-11 16:29:02.304386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.557 [2024-07-11 16:29:02.304401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.124 [2024-07-11 16:29:02.627254] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:26.124 [2024-07-11 16:29:02.627371] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:26.124 [2024-07-11 16:29:02.635238] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:26.124 [2024-07-11 16:29:02.635331] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:26.124 [2024-07-11 16:29:02.643275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:26.124 [2024-07-11 16:29:02.643336] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:26.124 [2024-07-11 16:29:02.643391] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:26.124 [2024-07-11 16:29:02.813615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:26.124 [2024-07-11 16:29:02.813770] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.124 [2024-07-11 16:29:02.813829] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:26.124 [2024-07-11 16:29:02.813850] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.124 [2024-07-11 16:29:02.816678] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.124 [2024-07-11 16:29:02.816740] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:26.383 Running I/O for 5 seconds... 
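bdev_verify exercises every device defined in bdev.json through the bdevperf example app: a verify workload writes a known pattern and reads it back for comparison, at queue depth 128 with 4 KiB IOs for 5 seconds. The core mask 0x3 starts two reactors (the two "Reactor started on core" notices above), which is why each bdev appears twice in the table below, once per core mask 0x1 and 0x2. The NOTICEs about split, passthru, and "vbdev creation deferred pending base bdev arrival" describe the stacked test topology being assembled; its RPC equivalents look roughly like the sketch below (illustrative only, since this run loads the stack from the JSON file rather than issuing live RPCs):

    # Rough RPC equivalents of the stack described by the NOTICEs above.
    scripts/rpc.py bdev_malloc_create -b Malloc2 32 512   # 32 MiB backing device, 512 B blocks
    scripts/rpc.py bdev_split_create Malloc2 8            # -> Malloc2p0..Malloc2p7
    scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT

The "deferred" notice simply means the passthru vbdev was requested before its base bdev Malloc3 existed, so creation waits until the base arrives.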
00:13:32.947 00:13:32.947 Latency(us) 00:13:32.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:32.947 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x1000 00:13:32.947 Malloc0 : 5.19 1346.68 5.26 0.00 0.00 94258.12 2755.49 249751.74 00:13:32.947 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x1000 length 0x1000 00:13:32.947 Malloc0 : 5.21 1269.29 4.96 0.00 0.00 100349.80 2636.33 266910.25 00:13:32.947 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x800 00:13:32.947 Malloc1p0 : 5.19 944.83 3.69 0.00 0.00 134219.24 4676.89 146800.64 00:13:32.947 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x800 length 0x800 00:13:32.947 Malloc1p0 : 5.21 891.16 3.48 0.00 0.00 142676.14 5779.08 151566.89 00:13:32.947 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x800 00:13:32.947 Malloc1p1 : 5.20 944.55 3.69 0.00 0.00 133977.73 4587.52 142034.39 00:13:32.947 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x800 length 0x800 00:13:32.947 Malloc1p1 : 5.21 890.94 3.48 0.00 0.00 142393.02 5123.72 147753.89 00:13:32.947 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x200 00:13:32.947 Malloc2p0 : 5.20 944.31 3.69 0.00 0.00 133734.64 4617.31 137268.13 00:13:32.947 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x200 length 0x200 00:13:32.947 Malloc2p0 : 5.21 890.69 3.48 0.00 0.00 142169.33 5153.51 142987.64 00:13:32.947 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x200 00:13:32.947 Malloc2p1 : 5.20 944.05 3.69 0.00 0.00 133521.55 4438.57 132501.88 00:13:32.947 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x200 length 0x200 00:13:32.947 Malloc2p1 : 5.21 890.49 3.48 0.00 0.00 141905.39 4944.99 139174.63 00:13:32.947 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x200 00:13:32.947 Malloc2p2 : 5.20 943.79 3.69 0.00 0.00 133316.96 4289.63 128688.87 00:13:32.947 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x200 length 0x200 00:13:32.947 Malloc2p2 : 5.21 890.26 3.48 0.00 0.00 141650.06 4915.20 135361.63 00:13:32.947 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x200 00:13:32.947 Malloc2p3 : 5.20 943.56 3.69 0.00 0.00 133114.74 4617.31 122969.37 00:13:32.947 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x200 length 0x200 00:13:32.947 Malloc2p3 : 5.22 889.94 3.48 0.00 0.00 141436.58 5034.36 131548.63 00:13:32.947 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x200 00:13:32.947 Malloc2p4 : 5.20 943.32 3.68 0.00 0.00 132889.97 
4319.42 119156.36 00:13:32.947 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x200 length 0x200 00:13:32.947 Malloc2p4 : 5.22 889.38 3.47 0.00 0.00 141197.26 4885.41 127735.62 00:13:32.947 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x200 00:13:32.947 Malloc2p5 : 5.20 943.08 3.68 0.00 0.00 132690.94 4468.36 115343.36 00:13:32.947 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x200 length 0x200 00:13:32.947 Malloc2p5 : 5.22 888.82 3.47 0.00 0.00 140972.65 4885.41 122969.37 00:13:32.947 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x200 00:13:32.947 Malloc2p6 : 5.20 942.83 3.68 0.00 0.00 132473.85 4527.94 110577.11 00:13:32.947 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x200 length 0x200 00:13:32.947 Malloc2p6 : 5.23 888.27 3.47 0.00 0.00 140718.47 5004.57 118679.74 00:13:32.947 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x200 00:13:32.947 Malloc2p7 : 5.21 942.59 3.68 0.00 0.00 132283.46 4259.84 106287.48 00:13:32.947 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x200 length 0x200 00:13:32.947 Malloc2p7 : 5.23 887.64 3.47 0.00 0.00 140512.57 3842.79 115819.99 00:13:32.947 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x1000 00:13:32.947 TestPT : 5.21 947.10 3.70 0.00 0.00 132071.91 4259.84 106287.48 00:13:32.947 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x1000 length 0x1000 00:13:32.947 TestPT : 5.23 872.44 3.41 0.00 0.00 142736.23 30146.56 117249.86 00:13:32.947 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x2000 00:13:32.947 raid0 : 5.22 956.35 3.74 0.00 0.00 130505.14 3053.38 100091.35 00:13:32.947 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x2000 length 0x2000 00:13:32.947 raid0 : 5.23 887.18 3.47 0.00 0.00 140090.30 5123.72 108670.60 00:13:32.947 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x2000 00:13:32.947 concat0 : 5.22 955.77 3.73 0.00 0.00 130333.76 4587.52 95325.09 00:13:32.947 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x2000 length 0x2000 00:13:32.947 concat0 : 5.23 887.02 3.46 0.00 0.00 139835.65 5064.15 106287.48 00:13:32.947 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x0 length 0x1000 00:13:32.947 raid1 : 5.22 955.19 3.73 0.00 0.00 130127.78 5064.15 95325.09 00:13:32.947 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x1000 length 0x1000 00:13:32.947 raid1 : 5.23 886.85 3.46 0.00 0.00 139581.54 4944.99 105810.85 00:13:32.947 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 
0x0 length 0x4e2 00:13:32.947 AIO0 : 5.23 954.26 3.73 0.00 0.00 129891.08 4944.99 95801.72 00:13:32.947 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:32.947 Verification LBA range: start 0x4e2 length 0x4e2 00:13:32.947 AIO0 : 5.24 886.50 3.46 0.00 0.00 139360.75 3991.74 105810.85 00:13:32.947 =================================================================================================================== 00:13:32.947 Total : 30139.13 117.73 0.00 0.00 133195.29 2636.33 266910.25 00:13:33.515 00:13:33.515 real 0m8.292s 00:13:33.515 user 0m14.872s 00:13:33.515 sys 0m0.657s 00:13:33.515 ************************************ 00:13:33.515 16:29:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:33.515 16:29:10 -- common/autotest_common.sh@10 -- # set +x 00:13:33.515 END TEST bdev_verify 00:13:33.515 ************************************ 00:13:33.515 16:29:10 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:33.515 16:29:10 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:13:33.515 16:29:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:33.515 16:29:10 -- common/autotest_common.sh@10 -- # set +x 00:13:33.515 ************************************ 00:13:33.515 START TEST bdev_verify_big_io 00:13:33.515 ************************************ 00:13:33.515 16:29:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:33.515 [2024-07-11 16:29:10.285042] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
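bdev_verify_big_io repeats the verify workload with 64 KiB IOs (-o 65536) instead of 4 KiB. At that size the smaller bdevs cannot hold 128 non-overlapping in-flight requests, so bdevperf clamps each job's queue depth, as the WARNINGs below report. The clamped values are consistent with half the number of IO-sized slots on the device; the halving is an inference from the numbers rather than a documented formula, but it matches both warned figures:

    # Malloc2p*: 8192 blocks x 512 B = 4 MiB = 64 slots of 64 KiB; warned cap is 32
    echo $(( 8192 * 512 / 65536 / 2 ))    # prints 32
    # AIO0: 5000 blocks x 2048 B = 156 whole slots of 64 KiB; warned cap is 78
    echo $(( 5000 * 2048 / 65536 / 2 ))   # prints 78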
00:13:33.515 [2024-07-11 16:29:10.285748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113099 ] 00:13:33.774 [2024-07-11 16:29:10.456845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:34.032 [2024-07-11 16:29:10.613399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.032 [2024-07-11 16:29:10.613413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.291 [2024-07-11 16:29:10.937081] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:34.291 [2024-07-11 16:29:10.937189] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:34.291 [2024-07-11 16:29:10.945069] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:34.291 [2024-07-11 16:29:10.945164] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:34.291 [2024-07-11 16:29:10.953098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:34.291 [2024-07-11 16:29:10.953162] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:34.291 [2024-07-11 16:29:10.953204] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:34.550 [2024-07-11 16:29:11.130663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:34.550 [2024-07-11 16:29:11.130808] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.550 [2024-07-11 16:29:11.130866] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:34.550 [2024-07-11 16:29:11.130886] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.550 [2024-07-11 16:29:11.133527] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.550 [2024-07-11 16:29:11.133588] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:34.808 [2024-07-11 16:29:11.443554] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:34.808 [2024-07-11 16:29:11.446575] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:34.808 [2024-07-11 16:29:11.450081] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:34.808 [2024-07-11 16:29:11.453621] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:34.808 [2024-07-11 16:29:11.456509] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:34.808 [2024-07-11 16:29:11.459923] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:34.808 [2024-07-11 16:29:11.462848] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:34.809 [2024-07-11 16:29:11.466350] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:34.809 [2024-07-11 16:29:11.469306] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:34.809 [2024-07-11 16:29:11.472625] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:34.809 [2024-07-11 16:29:11.475533] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:34.809 [2024-07-11 16:29:11.478835] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:34.809 [2024-07-11 16:29:11.481839] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:34.809 [2024-07-11 16:29:11.485235] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:34.809 [2024-07-11 16:29:11.488740] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:34.809 [2024-07-11 16:29:11.491633] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:34.809 [2024-07-11 16:29:11.562995] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:34.809 [2024-07-11 16:29:11.568657] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:34.809 Running I/O for 5 seconds... 00:13:41.369 00:13:41.369 Latency(us) 00:13:41.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.369 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x0 length 0x100 00:13:41.369 Malloc0 : 5.39 512.43 32.03 0.00 0.00 243722.49 14417.92 701592.67 00:13:41.369 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x100 length 0x100 00:13:41.369 Malloc0 : 5.41 464.68 29.04 0.00 0.00 268008.23 16324.42 827421.79 00:13:41.369 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x0 length 0x80 00:13:41.369 Malloc1p0 : 5.47 279.91 17.49 0.00 0.00 439628.45 33125.47 846486.81 00:13:41.369 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x80 length 0x80 00:13:41.369 Malloc1p0 : 5.48 341.53 21.35 0.00 0.00 360224.73 32648.84 743535.71 00:13:41.369 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x0 length 0x80 00:13:41.369 Malloc1p1 : 5.67 162.56 10.16 0.00 0.00 747256.44 31933.91 1502323.43 00:13:41.369 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x80 length 0x80 00:13:41.369 Malloc1p1 : 5.65 157.40 9.84 0.00 0.00 774977.71 32648.84 1609087.53 00:13:41.369 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x0 length 0x20 00:13:41.369 Malloc2p0 : 5.47 94.12 5.88 0.00 0.00 323220.78 5064.15 556698.53 00:13:41.369 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x20 length 0x20 00:13:41.369 Malloc2p0 : 5.48 90.51 5.66 0.00 0.00 337173.59 5034.36 484251.46 00:13:41.369 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x0 length 0x20 00:13:41.369 Malloc2p1 : 5.47 94.09 5.88 0.00 0.00 322245.83 5600.35 549072.52 00:13:41.369 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x20 length 0x20 00:13:41.369 Malloc2p1 : 5.48 90.44 5.65 0.00 0.00 336232.44 5659.93 474718.95 00:13:41.369 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x0 length 0x20 00:13:41.369 Malloc2p2 : 5.47 94.07 5.88 0.00 0.00 321219.50 5957.82 537633.51 00:13:41.369 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x20 length 0x20 00:13:41.369 Malloc2p2 : 5.49 90.36 5.65 0.00 0.00 335312.73 5928.03 465186.44 00:13:41.369 Job: Malloc2p3 (Core Mask 0x1, workload: 
verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x0 length 0x20 00:13:41.369 Malloc2p3 : 5.48 94.04 5.88 0.00 0.00 320219.16 4915.20 526194.50 00:13:41.369 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x20 length 0x20 00:13:41.369 Malloc2p3 : 5.49 90.34 5.65 0.00 0.00 334266.30 6285.50 453747.43 00:13:41.369 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x0 length 0x20 00:13:41.369 Malloc2p4 : 5.48 93.99 5.87 0.00 0.00 319239.04 5213.09 514755.49 00:13:41.369 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x20 length 0x20 00:13:41.369 Malloc2p4 : 5.49 90.32 5.64 0.00 0.00 333210.32 5540.77 444214.92 00:13:41.369 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x0 length 0x20 00:13:41.369 Malloc2p5 : 5.48 93.97 5.87 0.00 0.00 318261.79 5838.66 503316.48 00:13:41.369 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x20 length 0x20 00:13:41.369 Malloc2p5 : 5.49 90.30 5.64 0.00 0.00 332211.58 5928.03 432775.91 00:13:41.369 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x0 length 0x20 00:13:41.369 Malloc2p6 : 5.48 93.90 5.87 0.00 0.00 317238.75 5689.72 491877.47 00:13:41.369 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x20 length 0x20 00:13:41.369 Malloc2p6 : 5.49 90.28 5.64 0.00 0.00 331183.82 5689.72 423243.40 00:13:41.369 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x0 length 0x20 00:13:41.369 Malloc2p7 : 5.54 97.13 6.07 0.00 0.00 307235.46 5332.25 480438.46 00:13:41.369 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x20 length 0x20 00:13:41.369 Malloc2p7 : 5.50 90.26 5.64 0.00 0.00 330170.51 5481.19 411804.39 00:13:41.369 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x0 length 0x100 00:13:41.369 TestPT : 5.65 169.13 10.57 0.00 0.00 694640.22 32172.22 1487071.42 00:13:41.369 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x100 length 0x100 00:13:41.369 TestPT : 5.66 147.58 9.22 0.00 0.00 794102.24 44564.48 1631965.56 00:13:41.369 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x0 length 0x200 00:13:41.369 raid0 : 5.68 173.69 10.86 0.00 0.00 671093.83 32410.53 1494697.43 00:13:41.369 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:41.369 Verification LBA range: start 0x200 length 0x200 00:13:41.370 raid0 : 5.66 162.92 10.18 0.00 0.00 717516.97 32410.53 1601461.53 00:13:41.370 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:41.370 Verification LBA range: start 0x0 length 0x200 00:13:41.370 concat0 : 5.68 179.07 11.19 0.00 0.00 643385.49 30146.56 1502323.43 00:13:41.370 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:41.370 Verification LBA range: start 0x200 length 0x200 00:13:41.370 concat0 : 5.67 168.76 10.55 0.00 
0.00 686188.72 31218.97 1593835.52 00:13:41.370 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:41.370 Verification LBA range: start 0x0 length 0x100 00:13:41.370 raid1 : 5.68 183.97 11.50 0.00 0.00 619495.07 26452.71 1509949.44 00:13:41.370 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:41.370 Verification LBA range: start 0x100 length 0x100 00:13:41.370 raid1 : 5.70 173.28 10.83 0.00 0.00 657655.30 19899.11 1609087.53 00:13:41.370 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:41.370 Verification LBA range: start 0x0 length 0x4e 00:13:41.370 AIO0 : 5.68 186.93 11.68 0.00 0.00 367375.14 3515.11 865551.83 00:13:41.370 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:41.370 Verification LBA range: start 0x4e length 0x4e 00:13:41.370 AIO0 : 5.66 190.55 11.91 0.00 0.00 363639.91 5093.93 930372.89 00:13:41.370 =================================================================================================================== 00:13:41.370 Total : 5132.50 320.78 0.00 0.00 448425.82 3515.11 1631965.56 00:13:42.744 00:13:42.744 real 0m9.039s 00:13:42.744 user 0m16.651s 00:13:42.744 sys 0m0.481s 00:13:42.744 ************************************ 00:13:42.744 END TEST bdev_verify_big_io 00:13:42.744 ************************************ 00:13:42.744 16:29:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:42.744 16:29:19 -- common/autotest_common.sh@10 -- # set +x 00:13:42.744 16:29:19 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:42.744 16:29:19 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:13:42.744 16:29:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:42.744 16:29:19 -- common/autotest_common.sh@10 -- # set +x 00:13:42.744 ************************************ 00:13:42.744 START TEST bdev_write_zeroes 00:13:42.744 ************************************ 00:13:42.744 16:29:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:42.744 [2024-07-11 16:29:19.377186] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:13:42.744 [2024-07-11 16:29:19.377376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113229 ] 00:13:42.744 [2024-07-11 16:29:19.542652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.002 [2024-07-11 16:29:19.697607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.259 [2024-07-11 16:29:20.019110] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:43.259 [2024-07-11 16:29:20.019218] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:43.259 [2024-07-11 16:29:20.027066] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:43.259 [2024-07-11 16:29:20.027147] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:43.259 [2024-07-11 16:29:20.035085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:43.259 [2024-07-11 16:29:20.035128] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:43.259 [2024-07-11 16:29:20.035169] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:43.517 [2024-07-11 16:29:20.207560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:43.517 [2024-07-11 16:29:20.207677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.517 [2024-07-11 16:29:20.207724] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:43.517 [2024-07-11 16:29:20.207749] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.517 [2024-07-11 16:29:20.210064] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.517 [2024-07-11 16:29:20.210116] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:43.775 Running I/O for 1 seconds... 
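bdev_write_zeroes runs bdevperf once more with a write_zeroes workload for one second. No core mask is passed this time, so a single core is used ("Total cores available: 1" above) and each device gets one 0x1 row in the table below. As a quick self-consistency check on the aggregate line, IOPS times the 4 KiB IO size reproduces the reported bandwidth:

    # 100055 IOPS x 4096 B per IO / 2^20 B per MiB -> ~390 MiB/s (table: 390.84)
    echo $(( 100055 * 4096 / 1048576 ))   # prints 390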
00:13:45.150 00:13:45.150 Latency(us) 00:13:45.150 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.150 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 Malloc0 : 1.04 6301.71 24.62 0.00 0.00 20293.88 685.15 34317.03 00:13:45.150 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 Malloc1p0 : 1.04 6295.28 24.59 0.00 0.00 20288.54 755.90 33602.09 00:13:45.150 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 Malloc1p1 : 1.04 6288.78 24.57 0.00 0.00 20273.33 744.73 32887.16 00:13:45.150 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 Malloc2p0 : 1.04 6282.50 24.54 0.00 0.00 20261.10 770.79 32172.22 00:13:45.150 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 Malloc2p1 : 1.04 6276.24 24.52 0.00 0.00 20251.27 744.73 31457.28 00:13:45.150 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 Malloc2p2 : 1.04 6269.90 24.49 0.00 0.00 20235.67 767.07 30504.03 00:13:45.150 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 Malloc2p3 : 1.04 6263.68 24.47 0.00 0.00 20222.07 748.45 29789.09 00:13:45.150 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 Malloc2p4 : 1.04 6257.46 24.44 0.00 0.00 20208.33 748.45 29074.15 00:13:45.150 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 Malloc2p5 : 1.04 6251.23 24.42 0.00 0.00 20193.60 748.45 28240.06 00:13:45.150 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 Malloc2p6 : 1.05 6244.95 24.39 0.00 0.00 20178.85 741.00 27525.12 00:13:45.150 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 Malloc2p7 : 1.05 6238.78 24.37 0.00 0.00 20167.16 752.17 26810.18 00:13:45.150 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 TestPT : 1.05 6232.60 24.35 0.00 0.00 20151.46 770.79 25976.09 00:13:45.150 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 raid0 : 1.05 6225.47 24.32 0.00 0.00 20131.60 1303.27 24665.37 00:13:45.150 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 concat0 : 1.05 6218.36 24.29 0.00 0.00 20101.38 1295.83 23354.65 00:13:45.150 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 raid1 : 1.05 6209.58 24.26 0.00 0.00 20058.73 1980.97 21328.99 00:13:45.150 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:45.150 AIO0 : 1.05 6198.51 24.21 0.00 0.00 20005.90 1601.16 21448.15 00:13:45.150 =================================================================================================================== 00:13:45.150 Total : 100055.04 390.84 0.00 0.00 20188.94 685.15 34317.03 00:13:46.523 00:13:46.523 real 0m3.965s 00:13:46.523 user 0m3.370s 00:13:46.523 sys 0m0.413s 00:13:46.523 ************************************ 00:13:46.523 END TEST bdev_write_zeroes 00:13:46.523 ************************************ 00:13:46.523 16:29:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.523 16:29:23 -- common/autotest_common.sh@10 -- # set +x 00:13:46.523 16:29:23 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed 
00:13:46.523 16:29:23 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:46.523 16:29:23 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:13:46.523 16:29:23 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:46.523 16:29:23 -- common/autotest_common.sh@10 -- # set +x
00:13:46.781 ************************************
00:13:46.781 START TEST bdev_json_nonenclosed
00:13:46.781 ************************************
00:13:46.781 16:29:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:46.781 [2024-07-11 16:29:23.401393] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:13:46.781 [2024-07-11 16:29:23.401748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113316 ]
00:13:46.781 [2024-07-11 16:29:23.568345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:47.039 [2024-07-11 16:29:23.725281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:47.039 [2024-07-11 16:29:23.725512] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:13:47.039 [2024-07-11 16:29:23.725554] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:47.297
00:13:47.297 real 0m0.694s
00:13:47.297 user 0m0.469s
00:13:47.297 sys 0m0.124s
00:13:47.297 16:29:24 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:47.297 16:29:24 -- common/autotest_common.sh@10 -- # set +x
00:13:47.297 ************************************
00:13:47.297 END TEST bdev_json_nonenclosed
00:13:47.297 ************************************
00:13:47.297 16:29:24 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:47.297 16:29:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:13:47.297 16:29:24 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:47.297 16:29:24 -- common/autotest_common.sh@10 -- # set +x
00:13:47.297 ************************************
00:13:47.297 START TEST bdev_json_nonarray
00:13:47.297 ************************************
00:13:47.297 16:29:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:47.555 [2024-07-11 16:29:24.152752] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:13:47.555 [2024-07-11 16:29:24.153181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113354 ]
00:13:47.555 [2024-07-11 16:29:24.318294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:47.813 [2024-07-11 16:29:24.474226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:47.813 [2024-07-11 16:29:24.474450] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:13:47.813 [2024-07-11 16:29:24.474490] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:48.071
00:13:48.071 real 0m0.698s
00:13:48.071 user 0m0.491s
00:13:48.071 sys 0m0.105s
00:13:48.071 16:29:24 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:48.071 ************************************
00:13:48.071 16:29:24 -- common/autotest_common.sh@10 -- # set +x
00:13:48.071 END TEST bdev_json_nonarray
00:13:48.071 ************************************
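Both negative tests feed bdevperf a deliberately malformed --json configuration and pass if spdk_app_start aborts: json_config.c:595 fires when the top level is not enclosed in {}, json_config.c:601 when 'subsystems' is not an array. The fixture shapes below are assumptions reconstructed from those two error strings, not copies of the repository's nonenclosed.json/nonarray.json.

# Sketch: config shapes that would trigger the two json_config.c errors above
# (fixture contents are assumed; only the error strings come from the log)
echo '"subsystems": []'    > nonenclosed.json   # top level not enclosed in {}  -> json_config.c:595
echo '{"subsystems": {}}'  > nonarray.json      # 'subsystems' not an array     -> json_config.c:601
echo '{"subsystems": []}'  > valid.json         # minimal shape that passes both checks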
00:13:48.071 16:29:24 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]]
00:13:48.071 16:29:24 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite ''
00:13:48.071 16:29:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:13:48.071 16:29:24 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:48.071 16:29:24 -- common/autotest_common.sh@10 -- # set +x
00:13:48.071 ************************************
00:13:48.071 START TEST bdev_qos
00:13:48.071 ************************************
00:13:48.071 16:29:24 -- common/autotest_common.sh@1104 -- # qos_test_suite ''
00:13:48.071 16:29:24 -- bdev/blockdev.sh@444 -- # QOS_PID=113383
00:13:48.071 16:29:24 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 113383'
Process qos testing pid: 113383
00:13:48.071 16:29:24 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:13:48.071 16:29:24 -- bdev/blockdev.sh@447 -- # waitforlisten 113383
00:13:48.072 16:29:24 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:13:48.072 16:29:24 -- common/autotest_common.sh@819 -- # '[' -z 113383 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:48.072 16:29:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:48.072 16:29:24 -- common/autotest_common.sh@824 -- # local max_retries=100
00:13:48.072 16:29:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:48.072 16:29:24 -- common/autotest_common.sh@828 -- # xtrace_disable
00:13:48.072 16:29:24 -- common/autotest_common.sh@10 -- # set +x
00:13:48.330 [2024-07-11 16:29:24.908283] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:13:48.330 [2024-07-11 16:29:24.908477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113383 ]
00:13:48.330 [2024-07-11 16:29:25.074577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:48.589 [2024-07-11 16:29:25.273927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:49.155 16:29:25 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:13:49.155 16:29:25 -- common/autotest_common.sh@852 -- # return 0
00:13:49.155 16:29:25 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:13:49.155 16:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:49.155 16:29:25 -- common/autotest_common.sh@10 -- # set +x
00:13:49.155 Malloc_0
00:13:49.155 16:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:49.155 16:29:25 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0
00:13:49.155 16:29:25 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0
00:13:49.155 16:29:25 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:13:49.155 16:29:25 -- common/autotest_common.sh@889 -- # local i
00:13:49.155 16:29:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:13:49.155 16:29:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:13:49.155 16:29:25 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:13:49.155 16:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:49.155 16:29:25 -- common/autotest_common.sh@10 -- # set +x
00:13:49.155 16:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:49.155 16:29:25 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:13:49.155 16:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:49.155 16:29:25 -- common/autotest_common.sh@10 -- # set +x
00:13:49.155 [
00:13:49.155 {
00:13:49.155 "name": "Malloc_0",
00:13:49.155 "aliases": [
00:13:49.155 "16ea53c3-95c1-4e75-a976-d182a987b0ce"
00:13:49.155 ],
00:13:49.155 "product_name": "Malloc disk",
00:13:49.155 "block_size": 512,
00:13:49.155 "num_blocks": 262144,
00:13:49.155 "uuid": "16ea53c3-95c1-4e75-a976-d182a987b0ce",
00:13:49.155 "assigned_rate_limits": {
00:13:49.155 "rw_ios_per_sec": 0,
00:13:49.155 "rw_mbytes_per_sec": 0,
00:13:49.155 "r_mbytes_per_sec": 0,
00:13:49.155 "w_mbytes_per_sec": 0
00:13:49.155 },
00:13:49.155 "claimed": false,
00:13:49.155 "zoned": false,
00:13:49.155 "supported_io_types": {
00:13:49.155 "read": true,
00:13:49.155 "write": true,
00:13:49.155 "unmap": true,
00:13:49.155 "write_zeroes": true,
00:13:49.155 "flush": true,
00:13:49.155 "reset": true,
00:13:49.155 "compare": false,
00:13:49.155 "compare_and_write": false,
00:13:49.155 "abort": true,
00:13:49.155 "nvme_admin": false,
00:13:49.155 "nvme_io": false
00:13:49.155 },
00:13:49.155 "memory_domains": [
00:13:49.155 {
00:13:49.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:49.155 "dma_device_type": 2
00:13:49.155 }
00:13:49.155 ],
00:13:49.155 "driver_specific": {}
00:13:49.155 }
00:13:49.155 ]
00:13:49.155 16:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:49.155 16:29:25 -- common/autotest_common.sh@895 -- # return 0
00:13:49.155 16:29:25 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512
00:13:49.155 16:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:49.155 16:29:25 -- common/autotest_common.sh@10 -- # set +x
00:13:49.155 Null_1
00:13:49.155 16:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:49.155 16:29:25 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1
00:13:49.155 16:29:25 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1
00:13:49.155 16:29:25 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:13:49.155 16:29:25 -- common/autotest_common.sh@889 -- # local i
00:13:49.155 16:29:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:13:49.155 16:29:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:13:49.155 16:29:25 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:13:49.155 16:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:49.155 16:29:25 -- common/autotest_common.sh@10 -- # set +x
00:13:49.155 16:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:49.155 16:29:25 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:13:49.155 16:29:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:49.155 16:29:25 -- common/autotest_common.sh@10 -- # set +x
00:13:49.155 [
00:13:49.155 {
00:13:49.155 "name": "Null_1",
00:13:49.155 "aliases": [
00:13:49.155 "f8ab7605-4c8f-4b05-8727-3a504f4512f7"
00:13:49.155 ],
00:13:49.155 "product_name": "Null disk",
00:13:49.155 "block_size": 512,
00:13:49.155 "num_blocks": 262144,
00:13:49.155 "uuid": "f8ab7605-4c8f-4b05-8727-3a504f4512f7",
00:13:49.155 "assigned_rate_limits": {
00:13:49.155 "rw_ios_per_sec": 0,
00:13:49.155 "rw_mbytes_per_sec": 0,
00:13:49.155 "r_mbytes_per_sec": 0,
00:13:49.155 "w_mbytes_per_sec": 0
00:13:49.155 },
00:13:49.155 "claimed": false,
00:13:49.155 "zoned": false,
00:13:49.155 "supported_io_types": {
00:13:49.155 "read": true,
00:13:49.156 "write": true,
00:13:49.156 "unmap": false,
00:13:49.156 "write_zeroes": true,
00:13:49.156 "flush": false,
00:13:49.156 "reset": true,
00:13:49.156 "compare": false,
00:13:49.156 "compare_and_write": false,
00:13:49.156 "abort": true,
00:13:49.156 "nvme_admin": false,
00:13:49.156 "nvme_io": false
00:13:49.156 },
00:13:49.156 "driver_specific": {}
00:13:49.156 }
00:13:49.156 ]
00:13:49.156 16:29:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:49.156 16:29:25 -- common/autotest_common.sh@895 -- # return 0
00:13:49.156 16:29:25 -- bdev/blockdev.sh@455 -- # qos_function_test
00:13:49.156 16:29:25 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:13:49.156 16:29:25 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000
00:13:49.156 16:29:25 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2
00:13:49.156 16:29:25 -- bdev/blockdev.sh@410 -- # local io_result=0
00:13:49.156 16:29:25 -- bdev/blockdev.sh@411 -- # local iops_limit=0
00:13:49.156 16:29:25 -- bdev/blockdev.sh@412 -- # local bw_limit=0
00:13:49.156 16:29:25 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0
00:13:49.156 16:29:25 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS
00:13:49.156 16:29:25 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:13:49.156 16:29:25 -- bdev/blockdev.sh@375 -- # local iostat_result
00:13:49.156 16:29:25 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:49.156 16:29:25 -- bdev/blockdev.sh@376 -- # grep Malloc_0
00:13:49.156 16:29:25 -- bdev/blockdev.sh@376 -- # tail -1
00:13:49.414 Running I/O for 60 seconds...
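The QoS suite builds two fixtures — Malloc_0, which gets the rate limits, and Null_1 as a second target — then kicks off a 60-second randread load and samples steady-state throughput with iostat.py. Reproduced by hand it would look roughly like the sketch below, assuming bdevperf was started with -z (wait for RPC) as in the log:

# Sketch of the QoS fixture setup and sampling step (names/sizes from the log)
./scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512
./scripts/rpc.py bdev_null_create Null_1 128 512
./examples/bdev/bdevperf/bdevperf.py perform_tests &            # start the 60 s randread run
./scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1      # last sample = steady-state line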
00:13:54.728 16:29:31 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 84414.55 337658.21 0.00 0.00 342016.00 0.00 0.00 '
00:13:54.728 16:29:31 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']'
00:13:54.728 16:29:31 -- bdev/blockdev.sh@378 -- # awk '{print $2}'
00:13:54.728 16:29:31 -- bdev/blockdev.sh@378 -- # iostat_result=84414.55
00:13:54.728 16:29:31 -- bdev/blockdev.sh@383 -- # echo 84414
00:13:54.728 16:29:31 -- bdev/blockdev.sh@414 -- # io_result=84414
00:13:54.728 16:29:31 -- bdev/blockdev.sh@416 -- # iops_limit=21000
00:13:54.728 16:29:31 -- bdev/blockdev.sh@417 -- # '[' 21000 -gt 1000 ']'
00:13:54.728 16:29:31 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 21000 Malloc_0
00:13:54.728 16:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:54.728 16:29:31 -- common/autotest_common.sh@10 -- # set +x
00:13:54.728 16:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:54.728 16:29:31 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 21000 IOPS Malloc_0
00:13:54.728 16:29:31 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:13:54.728 16:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:54.728 16:29:31 -- common/autotest_common.sh@10 -- # set +x
00:13:54.728 ************************************
00:13:54.728 START TEST bdev_qos_iops
00:13:54.728 ************************************
00:13:54.728 16:29:31 -- common/autotest_common.sh@1104 -- # run_qos_test 21000 IOPS Malloc_0
00:13:54.728 16:29:31 -- bdev/blockdev.sh@387 -- # local qos_limit=21000
00:13:54.728 16:29:31 -- bdev/blockdev.sh@388 -- # local qos_result=0
00:13:54.728 16:29:31 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0
00:13:54.728 16:29:31 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS
00:13:54.728 16:29:31 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:13:54.728 16:29:31 -- bdev/blockdev.sh@375 -- # local iostat_result
00:13:54.728 16:29:31 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:54.728 16:29:31 -- bdev/blockdev.sh@376 -- # grep Malloc_0
00:13:54.728 16:29:31 -- bdev/blockdev.sh@376 -- # tail -1
00:13:59.994 16:29:36 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 20998.59 83994.37 0.00 0.00 85008.00 0.00 0.00 '
00:13:59.994 16:29:36 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']'
00:13:59.994 16:29:36 -- bdev/blockdev.sh@378 -- # awk '{print $2}'
00:13:59.994 16:29:36 -- bdev/blockdev.sh@378 -- # iostat_result=20998.59
00:13:59.994 16:29:36 -- bdev/blockdev.sh@383 -- # echo 20998
00:13:59.994 16:29:36 -- bdev/blockdev.sh@390 -- # qos_result=20998
00:13:59.994 16:29:36 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']'
00:13:59.994 16:29:36 -- bdev/blockdev.sh@394 -- # lower_limit=18900
00:13:59.994 16:29:36 -- bdev/blockdev.sh@395 -- # upper_limit=23100
00:13:59.994 16:29:36 -- bdev/blockdev.sh@398 -- # '[' 20998 -lt 18900 ']'
00:13:59.994 16:29:36 -- bdev/blockdev.sh@398 -- # '[' 20998 -gt 23100 ']'
00:13:59.994
00:13:59.994 real 0m5.189s
00:13:59.994 user 0m0.091s
00:13:59.994 sys 0m0.037s
00:13:59.994 16:29:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:59.994 ************************************
00:13:59.994 END TEST bdev_qos_iops
00:13:59.994 ************************************
00:13:59.994 16:29:36 -- common/autotest_common.sh@10 -- # set +x
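The IOPS subtest above measured roughly 84414 IOPS unthrottled, capped Malloc_0 at a quarter of that (21000), re-measured under load, and accepted any value within +/-10% of the cap: 20998 falls inside [18900, 23100]. The acceptance arithmetic, restated as a small sketch with all numbers taken from the log:

# Sketch of the +/-10% acceptance check (values from the log above)
qos_limit=21000
lower_limit=$((qos_limit * 9 / 10))    # 18900
upper_limit=$((qos_limit * 11 / 10))   # 23100
qos_result=20998                       # IOPS reported by iostat.py under the cap
[ "$qos_result" -ge "$lower_limit" ] && [ "$qos_result" -le "$upper_limit" ] && echo within tolerance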
00:13:59.994 16:29:36 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1
00:13:59.994 16:29:36 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:13:59.994 16:29:36 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1
00:13:59.994 16:29:36 -- bdev/blockdev.sh@375 -- # local iostat_result
00:13:59.994 16:29:36 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:13:59.994 16:29:36 -- bdev/blockdev.sh@376 -- # grep Null_1
00:13:59.994 16:29:36 -- bdev/blockdev.sh@376 -- # tail -1
00:14:05.261 16:29:41 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 30195.88 120783.53 0.00 0.00 122880.00 0.00 0.00 '
00:14:05.261 16:29:41 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:14:05.261 16:29:41 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:14:05.261 16:29:41 -- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:14:05.261 16:29:41 -- bdev/blockdev.sh@380 -- # iostat_result=122880.00
00:14:05.261 16:29:41 -- bdev/blockdev.sh@383 -- # echo 122880
00:14:05.261 16:29:41 -- bdev/blockdev.sh@425 -- # bw_limit=122880
00:14:05.261 16:29:41 -- bdev/blockdev.sh@426 -- # bw_limit=12
00:14:05.261 16:29:41 -- bdev/blockdev.sh@427 -- # '[' 12 -lt 2 ']'
00:14:05.261 16:29:41 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1
00:14:05.261 16:29:41 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:05.261 16:29:41 -- common/autotest_common.sh@10 -- # set +x
00:14:05.261 16:29:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:05.261 16:29:41 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1
00:14:05.261 16:29:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:14:05.261 16:29:41 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:14:05.261 16:29:41 -- common/autotest_common.sh@10 -- # set +x
00:14:05.261 ************************************
00:14:05.261 START TEST bdev_qos_bw
00:14:05.261 ************************************
00:14:05.261 16:29:41 -- common/autotest_common.sh@1104 -- # run_qos_test 12 BANDWIDTH Null_1
00:14:05.261 16:29:41 -- bdev/blockdev.sh@387 -- # local qos_limit=12
00:14:05.261 16:29:41 -- bdev/blockdev.sh@388 -- # local qos_result=0
00:14:05.261 16:29:41 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1
00:14:05.261 16:29:41 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:14:05.261 16:29:41 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1
00:14:05.261 16:29:41 -- bdev/blockdev.sh@375 -- # local iostat_result
00:14:05.261 16:29:41 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:14:05.261 16:29:41 -- bdev/blockdev.sh@376 -- # grep Null_1
00:14:05.261 16:29:41 -- bdev/blockdev.sh@376 -- # tail -1
00:14:10.536 16:29:46 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 3072.64 12290.58 0.00 0.00 12472.00 0.00 0.00 '
00:14:10.536 16:29:46 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:14:10.536 16:29:46 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:14:10.536 16:29:46 -- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:14:10.536 16:29:46 -- bdev/blockdev.sh@380 -- # iostat_result=12472.00
00:14:10.536 16:29:46 -- bdev/blockdev.sh@383 -- # echo 12472
00:14:10.536 16:29:46 -- bdev/blockdev.sh@390 -- # qos_result=12472
00:14:10.536 16:29:46 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:14:10.536 16:29:46 -- bdev/blockdev.sh@392 -- # qos_limit=12288
00:14:10.536 16:29:46 -- bdev/blockdev.sh@394 -- # lower_limit=11059
00:14:10.536 16:29:46 -- bdev/blockdev.sh@395 -- # upper_limit=13516
00:14:10.536 16:29:46 -- bdev/blockdev.sh@398 -- # '[' 12472 -lt 11059 ']'
00:14:10.536 16:29:46 -- bdev/blockdev.sh@398 -- # '[' 12472 -gt 13516 ']'
************************************
END TEST bdev_qos_bw
************************************
00:14:10.536
00:14:10.536 real 0m5.212s
00:14:10.536 user 0m0.115s
00:14:10.536 sys 0m0.014s
00:14:10.536 16:29:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:10.536 16:29:46 -- common/autotest_common.sh@10 -- # set +x
00:14:10.536 16:29:46 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:14:10.536 16:29:46 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:10.536 16:29:46 -- common/autotest_common.sh@10 -- # set +x
00:14:10.536 16:29:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:10.536 16:29:46 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:14:10.536 16:29:46 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:14:10.536 16:29:46 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:14:10.536 16:29:46 -- common/autotest_common.sh@10 -- # set +x
00:14:10.536 ************************************
00:14:10.536 START TEST bdev_qos_ro_bw
00:14:10.536 ************************************
00:14:10.536 16:29:46 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:14:10.536 16:29:46 -- bdev/blockdev.sh@387 -- # local qos_limit=2
00:14:10.536 16:29:46 -- bdev/blockdev.sh@388 -- # local qos_result=0
00:14:10.536 16:29:46 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0
00:14:10.536 16:29:46 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:14:10.536 16:29:46 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:14:10.536 16:29:46 -- bdev/blockdev.sh@375 -- # local iostat_result
00:14:10.536 16:29:46 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:14:10.536 16:29:46 -- bdev/blockdev.sh@376 -- # grep Malloc_0
00:14:10.536 16:29:46 -- bdev/blockdev.sh@376 -- # tail -1
00:14:15.825 16:29:52 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 511.87 2047.47 0.00 0.00 2060.00 0.00 0.00 '
00:14:15.825 16:29:52 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:14:15.825 16:29:52 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:14:15.825 16:29:52 -- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:14:15.825 16:29:52 -- bdev/blockdev.sh@380 -- # iostat_result=2060.00
00:14:15.825 16:29:52 -- bdev/blockdev.sh@383 -- # echo 2060
00:14:15.825 16:29:52 -- bdev/blockdev.sh@390 -- # qos_result=2060
00:14:15.825 16:29:52 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:14:15.825 16:29:52 -- bdev/blockdev.sh@392 -- # qos_limit=2048
00:14:15.825 16:29:52 -- bdev/blockdev.sh@394 -- # lower_limit=1843
00:14:15.825 16:29:52 -- bdev/blockdev.sh@395 -- # upper_limit=2252
00:14:15.825 16:29:52 -- bdev/blockdev.sh@398 -- # '[' 2060 -lt 1843 ']'
00:14:15.825 16:29:52 -- bdev/blockdev.sh@398 -- # '[' 2060 -gt 2252 ']'
00:14:15.825
00:14:15.825 real 0m5.159s
00:14:15.825 user 0m0.095s
00:14:15.825 sys 0m0.033s
00:14:15.825 16:29:52 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:15.825 16:29:52 -- common/autotest_common.sh@10 -- # set +x
00:14:15.825 ************************************
00:14:15.825 END TEST bdev_qos_ro_bw
00:14:15.825 ************************************
00:14:15.825 16:29:52 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:14:15.825 16:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:15.825 16:29:52 -- common/autotest_common.sh@10 -- # set +x
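The two bandwidth subtests above follow the same pattern with byte limits instead of IOPS: Null_1 ran at about 122880 KB/s unthrottled and was capped at 12 MB/s (measured 12472 KB/s, inside 12288 +/-10% = [11059, 13516]), and Malloc_0's read path was capped at 2 MB/s (measured 2060 KB/s, inside [1843, 2252]). The corresponding RPCs, restated in isolation:

# Sketch of the bandwidth caps applied by the tests (values from the log)
./scripts/rpc.py bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1    # total read/write cap
./scripts/rpc.py bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0    # read-only cap
# Acceptance window is again +/-10%, e.g. 12288*9/10=11059 and 12288*11/10=13516.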
00:14:16.084 16:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:16.084 16:29:52 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1
00:14:16.084 16:29:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:16.084 16:29:52 -- common/autotest_common.sh@10 -- # set +x
00:14:16.084
00:14:16.084 Latency(us)
00:14:16.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:16.084 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:14:16.084 Malloc_0 : 26.60 28740.52 112.27 0.00 0.00 8825.24 1779.90 503316.48
00:14:16.084 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:14:16.084 Null_1 : 26.78 29027.65 113.39 0.00 0.00 8801.03 577.16 174444.92
00:14:16.084 ===================================================================================================================
00:14:16.084 Total : 57768.18 225.66 0.00 0.00 8813.04 577.16 503316.48
00:14:16.084 0
00:14:16.084 16:29:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:16.084 16:29:52 -- bdev/blockdev.sh@459 -- # killprocess 113383
00:14:16.084 16:29:52 -- common/autotest_common.sh@926 -- # '[' -z 113383 ']'
00:14:16.084 16:29:52 -- common/autotest_common.sh@930 -- # kill -0 113383
00:14:16.084 16:29:52 -- common/autotest_common.sh@931 -- # uname
00:14:16.084 16:29:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:14:16.084 16:29:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113383
00:14:16.084 16:29:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:14:16.084 16:29:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
killing process with pid 113383
Received shutdown signal, test time was about 26.813678 seconds
00:14:16.084
00:14:16.084 Latency(us)
00:14:16.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:16.084 ===================================================================================================================
00:14:16.084 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:16.084 16:29:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113383'
00:14:16.084 16:29:52 -- common/autotest_common.sh@945 -- # kill 113383
00:14:16.084 16:29:52 -- common/autotest_common.sh@950 -- # wait 113383
00:14:17.462 16:29:53 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT
************************************
END TEST bdev_qos
00:14:17.462
00:14:17.462 real 0m29.074s
00:14:17.462 user 0m29.681s
00:14:17.462 sys 0m0.599s
00:14:17.462 16:29:53 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:17.462 16:29:53 -- common/autotest_common.sh@10 -- # set +x
00:14:17.462 ************************************
00:14:17.462 16:29:53 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite ''
00:14:17.462 16:29:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:14:17.462 16:29:53 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:14:17.462 16:29:53 -- common/autotest_common.sh@10 -- # set +x
00:14:17.462 ************************************
00:14:17.462 START TEST bdev_qd_sampling
00:14:17.462 ************************************
00:14:17.462 16:29:53 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite ''
00:14:17.462 16:29:53 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD
00:14:17.462 16:29:53 -- bdev/blockdev.sh@539 -- # QD_PID=113903
Process bdev QD sampling period testing pid: 113903
00:14:17.462 16:29:53 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 113903'
00:14:17.462 16:29:53 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT
00:14:17.462 16:29:53 -- bdev/blockdev.sh@542 -- # waitforlisten 113903
00:14:17.462 16:29:53 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''
00:14:17.462 16:29:53 -- common/autotest_common.sh@819 -- # '[' -z 113903 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:17.462 16:29:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:17.462 16:29:53 -- common/autotest_common.sh@824 -- # local max_retries=100
00:14:17.462 16:29:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:17.462 16:29:53 -- common/autotest_common.sh@828 -- # xtrace_disable
00:14:17.462 16:29:53 -- common/autotest_common.sh@10 -- # set +x
00:14:17.462 [2024-07-11 16:29:54.039909] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:14:17.462 [2024-07-11 16:29:54.040148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113903 ]
00:14:17.462 [2024-07-11 16:29:54.218664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:17.721 [2024-07-11 16:29:54.436833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:17.721 [2024-07-11 16:29:54.436842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:18.287 16:29:54 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:14:18.287 16:29:54 -- common/autotest_common.sh@852 -- # return 0
00:14:18.287 16:29:54 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512
00:14:18.287 16:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:18.287 16:29:54 -- common/autotest_common.sh@10 -- # set +x
00:14:18.287 Malloc_QD
00:14:18.287 16:29:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:18.287 16:29:55 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD
00:14:18.287 16:29:55 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD
00:14:18.287 16:29:55 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:14:18.287 16:29:55 -- common/autotest_common.sh@889 -- # local i
00:14:18.287 16:29:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:14:18.287 16:29:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:14:18.287 16:29:55 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:14:18.287 16:29:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:18.287 16:29:55 -- common/autotest_common.sh@10 -- # set +x
00:14:18.287 16:29:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:18.287 16:29:55 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000
00:14:18.287 16:29:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:18.287 16:29:55 -- common/autotest_common.sh@10 -- # set +x
00:14:18.287 [
00:14:18.287 {
00:14:18.287 "name": "Malloc_QD",
00:14:18.287 "aliases": [
00:14:18.287 "13834c89-3b1c-4bb2-b8fb-5e50bf4a19c7"
00:14:18.287 ],
00:14:18.287 "product_name": "Malloc disk",
00:14:18.287 "block_size": 512,
00:14:18.287 "num_blocks": 262144,
00:14:18.287 "uuid": "13834c89-3b1c-4bb2-b8fb-5e50bf4a19c7",
00:14:18.287 "assigned_rate_limits": {
00:14:18.287 "rw_ios_per_sec": 0,
00:14:18.287 "rw_mbytes_per_sec": 0,
00:14:18.287 "r_mbytes_per_sec": 0,
00:14:18.287 "w_mbytes_per_sec": 0
00:14:18.287 },
00:14:18.287 "claimed": false,
00:14:18.287 "zoned": false,
00:14:18.287 "supported_io_types": {
00:14:18.287 "read": true,
00:14:18.287 "write": true,
00:14:18.287 "unmap": true,
00:14:18.287 "write_zeroes": true,
00:14:18.287 "flush": true,
00:14:18.287 "reset": true,
00:14:18.287 "compare": false,
00:14:18.287 "compare_and_write": false,
00:14:18.287 "abort": true,
00:14:18.287 "nvme_admin": false,
00:14:18.287 "nvme_io": false
00:14:18.287 },
00:14:18.287 "memory_domains": [
00:14:18.287 {
00:14:18.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:18.287 "dma_device_type": 2
00:14:18.287 }
00:14:18.287 ],
00:14:18.287 "driver_specific": {}
00:14:18.287 }
00:14:18.287 ]
00:14:18.287 16:29:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:18.287 16:29:55 -- common/autotest_common.sh@895 -- # return 0
00:14:18.287 16:29:55 -- bdev/blockdev.sh@548 -- # sleep 2
00:14:18.287 16:29:55 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:14:18.546 Running I/O for 5 seconds...
00:14:20.447 16:29:57 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD
00:14:20.447 16:29:57 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD
00:14:20.447 16:29:57 -- bdev/blockdev.sh@518 -- # local sampling_period=10
00:14:20.447 16:29:57 -- bdev/blockdev.sh@519 -- # local iostats
00:14:20.447 16:29:57 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10
00:14:20.447 16:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:20.447 16:29:57 -- common/autotest_common.sh@10 -- # set +x
00:14:20.447 16:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:20.447 16:29:57 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD
00:14:20.447 16:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:20.447 16:29:57 -- common/autotest_common.sh@10 -- # set +x
00:14:20.447 16:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:20.447 16:29:57 -- bdev/blockdev.sh@523 -- # iostats='{
00:14:20.447 "tick_rate": 2200000000,
00:14:20.447 "ticks": 1721000661732,
00:14:20.447 "bdevs": [
00:14:20.447 {
00:14:20.447 "name": "Malloc_QD",
00:14:20.447 "bytes_read": 1019253248,
00:14:20.447 "num_read_ops": 248835,
00:14:20.447 "bytes_written": 0,
00:14:20.447 "num_write_ops": 0,
00:14:20.447 "bytes_unmapped": 0,
00:14:20.447 "num_unmap_ops": 0,
00:14:20.447 "bytes_copied": 0,
00:14:20.447 "num_copy_ops": 0,
00:14:20.447 "read_latency_ticks": 2179399866266,
00:14:20.447 "max_read_latency_ticks": 12872486,
00:14:20.447 "min_read_latency_ticks": 337122,
00:14:20.447 "write_latency_ticks": 0,
00:14:20.447 "max_write_latency_ticks": 0,
00:14:20.447 "min_write_latency_ticks": 0,
00:14:20.447 "unmap_latency_ticks": 0,
00:14:20.447 "max_unmap_latency_ticks": 0,
00:14:20.447 "min_unmap_latency_ticks": 0,
00:14:20.447 "copy_latency_ticks": 0,
00:14:20.447 "max_copy_latency_ticks": 0,
00:14:20.447 "min_copy_latency_ticks": 0,
00:14:20.447 "io_error": {},
00:14:20.447 "queue_depth_polling_period": 10,
00:14:20.447 "queue_depth": 512,
00:14:20.447 "io_time": 20,
00:14:20.447 "weighted_io_time": 10240
00:14:20.447 }
00:14:20.447 ]
00:14:20.447 }'
00:14:20.447 16:29:57 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period'
00:14:20.447 16:29:57 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10
00:14:20.447 16:29:57 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']'
00:14:20.447 16:29:57 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']'
00:14:20.447 16:29:57 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD
00:14:20.447 16:29:57 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:20.447 16:29:57 -- common/autotest_common.sh@10 -- # set +x
00:14:20.447
00:14:20.447 Latency(us)
00:14:20.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:20.447 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:14:20.447 Malloc_QD : 2.02 64150.88 250.59 0.00 0.00 3981.49 953.25 5868.45
00:14:20.447 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:14:20.447 Malloc_QD : 2.02 63948.83 249.80 0.00 0.00 3994.02 875.05 5183.30
00:14:20.447 ===================================================================================================================
00:14:20.447 Total : 128099.72 500.39 0.00 0.00 3987.75 875.05 5868.45
00:14:20.447 0
00:14:20.447 16:29:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:20.447 16:29:57 -- bdev/blockdev.sh@552 -- # killprocess 113903
00:14:20.447 16:29:57 -- common/autotest_common.sh@926 -- # '[' -z 113903 ']'
00:14:20.447 16:29:57 -- common/autotest_common.sh@930 -- # kill -0 113903
00:14:20.447 16:29:57 -- common/autotest_common.sh@931 -- # uname
00:14:20.447 16:29:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:14:20.447 16:29:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113903
00:14:20.706 16:29:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:14:20.706 16:29:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
killing process with pid 113903
00:14:20.706 16:29:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113903'
00:14:20.706 16:29:57 -- common/autotest_common.sh@945 -- # kill 113903
00:14:20.706 16:29:57 -- common/autotest_common.sh@950 -- # wait 113903
Received shutdown signal, test time was about 2.135906 seconds
00:14:20.707
00:14:20.707 Latency(us)
00:14:20.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:20.707 ===================================================================================================================
00:14:20.707 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:21.642 16:29:58 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT
00:14:21.642
00:14:21.642 real 0m4.362s
00:14:21.642 user 0m8.031s
00:14:21.642 sys 0m0.388s
00:14:21.642 16:29:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:21.642 ************************************
00:14:21.642 END TEST bdev_qd_sampling
00:14:21.642 ************************************
00:14:21.642 16:29:58 -- common/autotest_common.sh@10 -- # set +x
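The sampling test enables queue-depth polling on Malloc_QD and reads the counters back through bdev_get_iostat, which is where the queue_depth_polling_period, queue_depth, io_time and weighted_io_time fields in the JSON above come from; average depth over the window is weighted_io_time / io_time, here 10240 / 20 = 512, matching the reported queue_depth. A hand-driven version, assuming rpc.py's bdev_get_iostat accepts -b as in current SPDK:

# Sketch: enable queue-depth sampling and read the period back (period in ms)
./scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
./scripts/rpc.py bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period'   # -> 10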
00:14:21.642 16:29:58 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite ''
00:14:21.642 16:29:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:14:21.642 16:29:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:14:21.642 16:29:58 -- common/autotest_common.sh@10 -- # set +x
00:14:21.642 ************************************
00:14:21.642 START TEST bdev_error
00:14:21.642 ************************************
00:14:21.642 16:29:58 -- common/autotest_common.sh@1104 -- # error_test_suite ''
00:14:21.642 16:29:58 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1
00:14:21.642 16:29:58 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2
00:14:21.642 16:29:58 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1
00:14:21.642 16:29:58 -- bdev/blockdev.sh@470 -- # ERR_PID=113990
Process error testing pid: 113990
00:14:21.642 16:29:58 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 113990'
00:14:21.642 16:29:58 -- bdev/blockdev.sh@472 -- # waitforlisten 113990
00:14:21.642 16:29:58 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f ''
00:14:21.642 16:29:58 -- common/autotest_common.sh@819 -- # '[' -z 113990 ']'
00:14:21.642 16:29:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:21.642 16:29:58 -- common/autotest_common.sh@824 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:21.642 16:29:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:21.642 16:29:58 -- common/autotest_common.sh@828 -- # xtrace_disable
00:14:21.642 16:29:58 -- common/autotest_common.sh@10 -- # set +x
00:14:21.901 [2024-07-11 16:29:58.453256] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:14:21.901 [2024-07-11 16:29:58.453517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113990 ]
00:14:21.901 [2024-07-11 16:29:58.617172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:22.159 [2024-07-11 16:29:58.788848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:22.726 16:29:59 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:14:22.726 16:29:59 -- common/autotest_common.sh@852 -- # return 0
00:14:22.726 16:29:59 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:14:22.726 16:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:22.726 16:29:59 -- common/autotest_common.sh@10 -- # set +x
00:14:22.726 Dev_1
00:14:22.726 16:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:22.726 16:29:59 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1
00:14:22.726 16:29:59 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1
00:14:22.726 16:29:59 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:14:22.726 16:29:59 -- common/autotest_common.sh@889 -- # local i
00:14:22.726 16:29:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:14:22.726 16:29:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:14:22.726 16:29:59 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:14:22.726 16:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:22.726 16:29:59 -- common/autotest_common.sh@10 -- # set +x
00:14:22.726 16:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:22.726 16:29:59 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:14:22.726 16:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:22.726 16:29:59 -- common/autotest_common.sh@10 -- # set +x
00:14:22.726 [
00:14:22.726 {
00:14:22.726 "name": "Dev_1",
00:14:22.726 "aliases": [
00:14:22.726 "b46bf4b6-1cd9-4859-88db-02d2d7e3733a"
00:14:22.726 ],
00:14:22.726 "product_name": "Malloc disk",
00:14:22.726 "block_size": 512,
00:14:22.726 "num_blocks": 262144,
00:14:22.726 "uuid": "b46bf4b6-1cd9-4859-88db-02d2d7e3733a",
00:14:22.726 "assigned_rate_limits": {
00:14:22.726 "rw_ios_per_sec": 0,
00:14:22.726 "rw_mbytes_per_sec": 0,
00:14:22.726 "r_mbytes_per_sec": 0,
00:14:22.726 "w_mbytes_per_sec": 0
00:14:22.726 },
00:14:22.726 "claimed": false,
00:14:22.726 "zoned": false,
00:14:22.726 "supported_io_types": {
00:14:22.726 "read": true,
00:14:22.726 "write": true,
00:14:22.726 "unmap": true,
00:14:22.726 "write_zeroes": true,
00:14:22.726 "flush": true,
00:14:22.726 "reset": true,
00:14:22.726 "compare": false,
00:14:22.726 "compare_and_write": false,
00:14:22.726 "abort": true,
00:14:22.726 "nvme_admin": false,
00:14:22.726 "nvme_io": false
00:14:22.726 },
00:14:22.726 "memory_domains": [
00:14:22.726 {
00:14:22.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:22.726 "dma_device_type": 2
00:14:22.726 }
00:14:22.726 ],
00:14:22.726 "driver_specific": {}
00:14:22.726 }
00:14:22.726 ]
00:14:22.726 16:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:22.726 16:29:59 -- common/autotest_common.sh@895 -- # return 0
00:14:22.726 16:29:59 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1
00:14:22.726 16:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:22.726 16:29:59 -- common/autotest_common.sh@10 -- # set +x
00:14:22.726 true
00:14:22.726 16:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:22.726 16:29:59 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:14:22.726 16:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:22.726 16:29:59 -- common/autotest_common.sh@10 -- # set +x
00:14:22.985 Dev_2
00:14:22.985 16:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:22.985 16:29:59 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2
00:14:22.985 16:29:59 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2
00:14:22.985 16:29:59 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:14:22.985 16:29:59 -- common/autotest_common.sh@889 -- # local i
00:14:22.985 16:29:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:14:22.985 16:29:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:14:22.985 16:29:59 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:14:22.985 16:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:22.985 16:29:59 -- common/autotest_common.sh@10 -- # set +x
00:14:22.985 16:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:22.985 16:29:59 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:14:22.985 16:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:22.985 16:29:59 -- common/autotest_common.sh@10 -- # set +x
00:14:22.985 [
00:14:22.985 {
00:14:22.985 "name": "Dev_2",
00:14:22.985 "aliases": [
00:14:22.985 "c730a7cd-95b8-4ec6-9733-f5b6336d2e58"
00:14:22.985 ],
00:14:22.985 "product_name": "Malloc disk",
00:14:22.985 "block_size": 512,
00:14:22.985 "num_blocks": 262144,
00:14:22.985 "uuid": "c730a7cd-95b8-4ec6-9733-f5b6336d2e58",
00:14:22.985 "assigned_rate_limits": {
00:14:22.985 "rw_ios_per_sec": 0,
00:14:22.985 "rw_mbytes_per_sec": 0,
00:14:22.985 "r_mbytes_per_sec": 0,
00:14:22.985 "w_mbytes_per_sec": 0
00:14:22.985 },
00:14:22.985 "claimed": false,
00:14:22.985 "zoned": false,
00:14:22.985 "supported_io_types": {
00:14:22.985 "read": true,
00:14:22.985 "write": true,
00:14:22.985 "unmap": true,
00:14:22.985 "write_zeroes": true,
00:14:22.985 "flush": true,
00:14:22.985 "reset": true,
00:14:22.985 "compare": false,
00:14:22.985 "compare_and_write": false,
00:14:22.985 "abort": true,
00:14:22.985 "nvme_admin": false,
00:14:22.985 "nvme_io": false
00:14:22.985 },
00:14:22.985 "memory_domains": [
00:14:22.985 {
00:14:22.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:22.985 "dma_device_type": 2
00:14:22.985 }
00:14:22.985 ],
00:14:22.985 "driver_specific": {}
00:14:22.985 }
00:14:22.985 ]
00:14:22.985 16:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:22.985 16:29:59 -- common/autotest_common.sh@895 -- # return 0
00:14:22.985 16:29:59 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:14:22.985 16:29:59 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:22.985 16:29:59 -- common/autotest_common.sh@10 -- # set +x
00:14:22.985 16:29:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:22.985 16:29:59 -- bdev/blockdev.sh@482 -- # sleep 1
00:14:22.985 16:29:59 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:14:23.922 Running I/O for 5 seconds...
00:14:23.922 16:30:00 -- bdev/blockdev.sh@485 -- # kill -0 113990
Process is existed as continue on error is set. Pid: 113990
00:14:23.922 16:30:00 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 113990'
00:14:23.922 16:30:00 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1
00:14:23.922 16:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:23.922 16:30:00 -- common/autotest_common.sh@10 -- # set +x
00:14:23.922 16:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:23.922 16:30:00 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1
00:14:23.922 16:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:23.922 16:30:00 -- common/autotest_common.sh@10 -- # set +x
Timeout while waiting for response:
00:14:23.922
00:14:23.922
00:14:24.181 16:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:24.181 16:30:00 -- bdev/blockdev.sh@495 -- # sleep 5
00:14:28.367
00:14:28.367 Latency(us)
00:14:28.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:28.367 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:14:28.367 EE_Dev_1 : 0.93 48014.03 187.55 5.36 0.00 330.79 114.04 673.98
00:14:28.367 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:14:28.367 Dev_2 : 5.00 98817.73 386.01 0.00 0.00 159.52 51.90 253564.74
00:14:28.367 ===================================================================================================================
00:14:28.367 Total : 146831.76 573.56 5.36 0.00 173.76 51.90 253564.74
00:14:29.300 16:30:05 -- bdev/blockdev.sh@497 -- # killprocess 113990
00:14:29.300 16:30:05 -- common/autotest_common.sh@926 -- # '[' -z 113990 ']'
00:14:29.300 16:30:05 -- common/autotest_common.sh@930 -- # kill -0 113990
00:14:29.300 16:30:05 -- common/autotest_common.sh@931 -- # uname
00:14:29.300 16:30:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:14:29.300 16:30:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 113990
00:14:29.300 16:30:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1
killing process with pid 113990
00:14:29.300 16:30:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:14:29.300 16:30:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 113990'
00:14:29.300 16:30:05 -- common/autotest_common.sh@945 -- # kill 113990
Received shutdown signal, test time was about 5.000000 seconds
00:14:29.301
00:14:29.301 Latency(us)
00:14:29.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:29.301 ===================================================================================================================
00:14:29.301 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:29.301 16:30:05 -- common/autotest_common.sh@950 -- # wait 113990
00:14:30.675 16:30:07 -- bdev/blockdev.sh@501 -- # ERR_PID=114130
00:14:30.675 16:30:07 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 ''
00:14:30.675 16:30:07 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 114130'
Process error testing pid: 114130
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:30.675 16:30:07 -- bdev/blockdev.sh@503 -- # waitforlisten 114130
00:14:30.675 16:30:07 -- common/autotest_common.sh@819 -- # '[' -z 114130 ']'
00:14:30.675 16:30:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:30.675 16:30:07 -- common/autotest_common.sh@824 -- # local max_retries=100
00:14:30.675 16:30:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:30.675 16:30:07 -- common/autotest_common.sh@828 -- # xtrace_disable
00:14:30.675 16:30:07 -- common/autotest_common.sh@10 -- # set +x
00:14:30.675 [2024-07-11 16:30:07.098097] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:14:30.675 [2024-07-11 16:30:07.098275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114130 ]
00:14:30.675 [2024-07-11 16:30:07.244574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:30.675 [2024-07-11 16:30:07.406203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:31.241 16:30:07 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:14:31.241 16:30:07 -- common/autotest_common.sh@852 -- # return 0
00:14:31.241 16:30:07 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:14:31.241 16:30:07 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:31.241 16:30:07 -- common/autotest_common.sh@10 -- # set +x
00:14:31.500 Dev_1
00:14:31.500 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:31.500 16:30:08 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1
00:14:31.500 16:30:08 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1
00:14:31.500 16:30:08 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:14:31.500 16:30:08 -- common/autotest_common.sh@889 -- # local i
00:14:31.500 16:30:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:14:31.500 16:30:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:14:31.500 16:30:08 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:14:31.500 16:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:31.500 16:30:08 -- common/autotest_common.sh@10 -- # set +x
00:14:31.500 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:31.500 16:30:08 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:14:31.500 16:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:31.500 16:30:08 -- common/autotest_common.sh@10 -- # set +x
00:14:31.500 [
00:14:31.500 {
00:14:31.500 "name": "Dev_1",
00:14:31.500 "aliases": [
00:14:31.500 "c19021a2-70fe-43e4-88a1-05fa86ee1e73"
00:14:31.500 ],
00:14:31.500 "product_name": "Malloc disk",
00:14:31.500 "block_size": 512,
00:14:31.500 "num_blocks": 262144,
00:14:31.500 "uuid": "c19021a2-70fe-43e4-88a1-05fa86ee1e73",
00:14:31.500 "assigned_rate_limits": {
00:14:31.500 "rw_ios_per_sec": 0,
00:14:31.500 "rw_mbytes_per_sec": 0,
00:14:31.500 "r_mbytes_per_sec": 0,
00:14:31.500 "w_mbytes_per_sec": 0
00:14:31.500 },
00:14:31.500 "claimed": false,
00:14:31.500 "zoned": false,
00:14:31.500 "supported_io_types": {
00:14:31.500 "read": true,
00:14:31.500 "write": true,
00:14:31.500 "unmap": true,
00:14:31.500 "write_zeroes": true,
00:14:31.500 "flush": true,
00:14:31.500 "reset": true,
00:14:31.500 "compare": false,
00:14:31.500 "compare_and_write": false,
00:14:31.500 "abort": true,
00:14:31.500 "nvme_admin": false,
00:14:31.500 "nvme_io": false
00:14:31.500 },
00:14:31.500 "memory_domains": [
00:14:31.500 {
00:14:31.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:31.500 "dma_device_type": 2
00:14:31.500 }
00:14:31.500 ],
00:14:31.500 "driver_specific": {}
00:14:31.500 }
00:14:31.500 ]
00:14:31.500 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:31.500 16:30:08 -- common/autotest_common.sh@895 -- # return 0
00:14:31.500 16:30:08 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1
00:14:31.500 16:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:31.500 16:30:08 -- common/autotest_common.sh@10 -- # set +x
00:14:31.500 true
00:14:31.500 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:31.500 16:30:08 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:14:31.500 16:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:31.500 16:30:08 -- common/autotest_common.sh@10 -- # set +x
00:14:31.500 Dev_2
00:14:31.500 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:31.500 16:30:08 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2
00:14:31.500 16:30:08 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2
00:14:31.500 16:30:08 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:14:31.500 16:30:08 -- common/autotest_common.sh@889 -- # local i
00:14:31.500 16:30:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:14:31.500 16:30:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:14:31.500 16:30:08 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:14:31.500 16:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:31.500 16:30:08 -- common/autotest_common.sh@10 -- # set +x
00:14:31.500 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:31.500 16:30:08 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:14:31.500 16:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:31.500 16:30:08 -- common/autotest_common.sh@10 -- # set +x
00:14:31.500 [
00:14:31.500 {
00:14:31.500 "name": "Dev_2",
00:14:31.500 "aliases": [
00:14:31.500 "e08a514f-5cac-4702-9a5d-a1da1f284a82"
00:14:31.500 ],
00:14:31.500 "product_name": "Malloc disk",
00:14:31.500 "block_size": 512,
00:14:31.500 "num_blocks": 262144,
00:14:31.500 "uuid": "e08a514f-5cac-4702-9a5d-a1da1f284a82",
00:14:31.500 "assigned_rate_limits": {
00:14:31.500 "rw_ios_per_sec": 0,
00:14:31.500 "rw_mbytes_per_sec": 0,
00:14:31.500 "r_mbytes_per_sec": 0,
00:14:31.500 "w_mbytes_per_sec": 0
00:14:31.500 },
00:14:31.500 "claimed": false,
00:14:31.500 "zoned": false,
00:14:31.500 "supported_io_types": {
00:14:31.500 "read": true,
00:14:31.500 "write": true,
00:14:31.500 "unmap": true,
00:14:31.500 "write_zeroes": true,
00:14:31.500 "flush": true,
00:14:31.500 "reset": true,
00:14:31.500 "compare": false,
00:14:31.500 "compare_and_write": false,
00:14:31.500 "abort": true,
00:14:31.500 "nvme_admin": false,
00:14:31.500 "nvme_io": false
00:14:31.500 },
00:14:31.500 "memory_domains": [
00:14:31.500 {
00:14:31.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:14:31.500 "dma_device_type": 2
00:14:31.500 }
00:14:31.500 ],
00:14:31.500 "driver_specific": {}
00:14:31.500 }
00:14:31.500 ]
00:14:31.500 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:31.500 16:30:08 -- common/autotest_common.sh@895 -- # return 0
00:14:31.500 16:30:08 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:14:31.500 16:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:31.500 16:30:08 -- common/autotest_common.sh@10 -- # set +x
00:14:31.500 16:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:31.500 16:30:08 -- bdev/blockdev.sh@513 -- # NOT wait 114130
00:14:31.500 16:30:08 -- common/autotest_common.sh@640 -- # local es=0
00:14:31.500 16:30:08 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:14:31.500 16:30:08 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 114130
00:14:31.500 16:30:08 -- common/autotest_common.sh@628 -- # local arg=wait
00:14:31.500 16:30:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:14:31.500 16:30:08 -- common/autotest_common.sh@632 -- # type -t wait
00:14:31.500 16:30:08 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:14:31.500 16:30:08 -- common/autotest_common.sh@643 -- # wait 114130
00:14:31.759 Running I/O for 5 seconds...
00:14:31.759 task offset: 56048 on job bdev=EE_Dev_1 fails
00:14:31.759
00:14:31.759 Latency(us)
00:14:31.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:31.759 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:14:31.759 Job: EE_Dev_1 ended in about 0.00 seconds with error
00:14:31.759 EE_Dev_1 : 0.00 30812.32 120.36 7002.80 0.00 356.75 115.43 633.02
00:14:31.759 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:14:31.759 Dev_2 : 0.00 22099.45 86.33 0.00 0.00 507.60 114.04 934.63
00:14:31.759 ===================================================================================================================
00:14:31.759 Total : 52911.77 206.69 7002.80 0.00 438.57 114.04 934.63
00:14:31.759 [2024-07-11 16:30:08.343051] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:31.759 request:
00:14:31.759 {
00:14:31.759 "method": "perform_tests",
00:14:31.759 "req_id": 1
00:14:31.759 }
00:14:31.759 Got JSON-RPC error response
00:14:31.759 response:
00:14:31.759 {
00:14:31.759 "code": -32603,
00:14:31.759 "message": "bdevperf failed with error Operation not permitted"
00:14:31.759 }
00:14:33.133 16:30:09 -- common/autotest_common.sh@643 -- # es=255
00:14:33.133 16:30:09 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:14:33.133 16:30:09 -- common/autotest_common.sh@652 -- # es=127
00:14:33.133 16:30:09 -- common/autotest_common.sh@653 -- # case "$es" in
00:14:33.133 16:30:09 -- common/autotest_common.sh@660 -- # es=1
00:14:33.133 16:30:09 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:14:33.133
00:14:33.133 real 0m11.379s
00:14:33.133 user 0m11.411s
00:14:33.133 sys 0m0.770s
00:14:33.133 ************************************
00:14:33.133 END TEST bdev_error
00:14:33.133 ************************************
00:14:33.133 16:30:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:33.133 16:30:09 -- common/autotest_common.sh@10 -- # set +x
00:14:33.133 16:30:09 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite ''
00:14:33.133 16:30:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:14:33.133 16:30:09 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:14:33.133 16:30:09 -- common/autotest_common.sh@10 -- # set +x
00:14:33.133 ************************************
00:14:33.133 START TEST bdev_stat
00:14:33.133 ************************************
00:14:33.133 16:30:09 -- common/autotest_common.sh@1104 -- # stat_test_suite ''
00:14:33.133 16:30:09 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT
00:14:33.133 16:30:09 -- bdev/blockdev.sh@594 -- # STAT_PID=114188
00:14:33.133 16:30:09 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 114188'
00:14:33.133 16:30:09 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C ''
00:14:33.133 Process Bdev IO statistics testing pid: 114188
00:14:33.133 16:30:09 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT
00:14:33.133 16:30:09 -- bdev/blockdev.sh@597 -- # waitforlisten 114188
00:14:33.133 16:30:09 -- common/autotest_common.sh@819 -- # '[' -z 114188 ']'
00:14:33.133 16:30:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:33.133 16:30:09 -- common/autotest_common.sh@824 -- # local max_retries=100
00:14:33.133 16:30:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket
/var/tmp/spdk.sock...' 00:14:33.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.133 16:30:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:33.133 16:30:09 -- common/autotest_common.sh@10 -- # set +x 00:14:33.133 [2024-07-11 16:30:09.889580] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:33.133 [2024-07-11 16:30:09.889969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114188 ] 00:14:33.392 [2024-07-11 16:30:10.064727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:33.650 [2024-07-11 16:30:10.293432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.650 [2024-07-11 16:30:10.293450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.217 16:30:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:34.217 16:30:10 -- common/autotest_common.sh@852 -- # return 0 00:14:34.217 16:30:10 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:34.217 16:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.217 16:30:10 -- common/autotest_common.sh@10 -- # set +x 00:14:34.217 Malloc_STAT 00:14:34.217 16:30:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.217 16:30:10 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:14:34.217 16:30:10 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:14:34.217 16:30:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:34.217 16:30:10 -- common/autotest_common.sh@889 -- # local i 00:14:34.217 16:30:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:34.217 16:30:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:34.217 16:30:10 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:14:34.217 16:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.217 16:30:10 -- common/autotest_common.sh@10 -- # set +x 00:14:34.217 16:30:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.218 16:30:10 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:34.218 16:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.218 16:30:10 -- common/autotest_common.sh@10 -- # set +x 00:14:34.218 [ 00:14:34.218 { 00:14:34.218 "name": "Malloc_STAT", 00:14:34.218 "aliases": [ 00:14:34.218 "5eb506a0-9fda-468e-b237-91d67c8d01e1" 00:14:34.218 ], 00:14:34.218 "product_name": "Malloc disk", 00:14:34.218 "block_size": 512, 00:14:34.218 "num_blocks": 262144, 00:14:34.218 "uuid": "5eb506a0-9fda-468e-b237-91d67c8d01e1", 00:14:34.218 "assigned_rate_limits": { 00:14:34.218 "rw_ios_per_sec": 0, 00:14:34.218 "rw_mbytes_per_sec": 0, 00:14:34.218 "r_mbytes_per_sec": 0, 00:14:34.218 "w_mbytes_per_sec": 0 00:14:34.218 }, 00:14:34.218 "claimed": false, 00:14:34.218 "zoned": false, 00:14:34.218 "supported_io_types": { 00:14:34.218 "read": true, 00:14:34.218 "write": true, 00:14:34.218 "unmap": true, 00:14:34.218 "write_zeroes": true, 00:14:34.218 "flush": true, 00:14:34.218 "reset": true, 00:14:34.218 "compare": false, 00:14:34.218 "compare_and_write": false, 00:14:34.218 "abort": true, 00:14:34.218 "nvme_admin": false, 00:14:34.218 "nvme_io": false 00:14:34.218 }, 00:14:34.218 "memory_domains": [ 00:14:34.218 { 00:14:34.218 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:34.218 "dma_device_type": 2 00:14:34.218 } 00:14:34.218 ], 00:14:34.218 "driver_specific": {} 00:14:34.218 } 00:14:34.218 ] 00:14:34.218 16:30:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.218 16:30:10 -- common/autotest_common.sh@895 -- # return 0 00:14:34.218 16:30:10 -- bdev/blockdev.sh@603 -- # sleep 2 00:14:34.218 16:30:10 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:34.218 Running I/O for 10 seconds... 00:14:36.121 16:30:12 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:14:36.121 16:30:12 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:14:36.121 16:30:12 -- bdev/blockdev.sh@558 -- # local iostats 00:14:36.121 16:30:12 -- bdev/blockdev.sh@559 -- # local io_count1 00:14:36.121 16:30:12 -- bdev/blockdev.sh@560 -- # local io_count2 00:14:36.121 16:30:12 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:14:36.121 16:30:12 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:14:36.121 16:30:12 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:14:36.121 16:30:12 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:14:36.121 16:30:12 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:36.121 16:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.121 16:30:12 -- common/autotest_common.sh@10 -- # set +x 00:14:36.121 16:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.121 16:30:12 -- bdev/blockdev.sh@566 -- # iostats='{ 00:14:36.121 "tick_rate": 2200000000, 00:14:36.121 "ticks": 1755845674070, 00:14:36.121 "bdevs": [ 00:14:36.121 { 00:14:36.121 "name": "Malloc_STAT", 00:14:36.121 "bytes_read": 1000378880, 00:14:36.121 "num_read_ops": 244227, 00:14:36.121 "bytes_written": 0, 00:14:36.122 "num_write_ops": 0, 00:14:36.122 "bytes_unmapped": 0, 00:14:36.122 "num_unmap_ops": 0, 00:14:36.122 "bytes_copied": 0, 00:14:36.122 "num_copy_ops": 0, 00:14:36.122 "read_latency_ticks": 2168962850493, 00:14:36.122 "max_read_latency_ticks": 14070670, 00:14:36.122 "min_read_latency_ticks": 314954, 00:14:36.122 "write_latency_ticks": 0, 00:14:36.122 "max_write_latency_ticks": 0, 00:14:36.122 "min_write_latency_ticks": 0, 00:14:36.122 "unmap_latency_ticks": 0, 00:14:36.122 "max_unmap_latency_ticks": 0, 00:14:36.122 "min_unmap_latency_ticks": 0, 00:14:36.122 "copy_latency_ticks": 0, 00:14:36.122 "max_copy_latency_ticks": 0, 00:14:36.122 "min_copy_latency_ticks": 0, 00:14:36.122 "io_error": {} 00:14:36.122 } 00:14:36.122 ] 00:14:36.122 }' 00:14:36.122 16:30:12 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:14:36.381 16:30:12 -- bdev/blockdev.sh@567 -- # io_count1=244227 00:14:36.381 16:30:12 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:36.381 16:30:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.381 16:30:12 -- common/autotest_common.sh@10 -- # set +x 00:14:36.381 16:30:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.381 16:30:12 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:14:36.381 "tick_rate": 2200000000, 00:14:36.381 "ticks": 1756005295514, 00:14:36.381 "name": "Malloc_STAT", 00:14:36.381 "channels": [ 00:14:36.381 { 00:14:36.381 "thread_id": 2, 00:14:36.381 "bytes_read": 515899392, 00:14:36.381 "num_read_ops": 125952, 00:14:36.381 "bytes_written": 0, 00:14:36.381 "num_write_ops": 0, 00:14:36.381 "bytes_unmapped": 0, 00:14:36.381 "num_unmap_ops": 0, 00:14:36.381 "bytes_copied": 0, 00:14:36.381 
"num_copy_ops": 0, 00:14:36.381 "read_latency_ticks": 1124206863691, 00:14:36.381 "max_read_latency_ticks": 19717812, 00:14:36.381 "min_read_latency_ticks": 6829434, 00:14:36.381 "write_latency_ticks": 0, 00:14:36.381 "max_write_latency_ticks": 0, 00:14:36.381 "min_write_latency_ticks": 0, 00:14:36.381 "unmap_latency_ticks": 0, 00:14:36.381 "max_unmap_latency_ticks": 0, 00:14:36.381 "min_unmap_latency_ticks": 0, 00:14:36.381 "copy_latency_ticks": 0, 00:14:36.381 "max_copy_latency_ticks": 0, 00:14:36.381 "min_copy_latency_ticks": 0 00:14:36.381 }, 00:14:36.381 { 00:14:36.381 "thread_id": 3, 00:14:36.381 "bytes_read": 519045120, 00:14:36.381 "num_read_ops": 126720, 00:14:36.381 "bytes_written": 0, 00:14:36.381 "num_write_ops": 0, 00:14:36.381 "bytes_unmapped": 0, 00:14:36.381 "num_unmap_ops": 0, 00:14:36.381 "bytes_copied": 0, 00:14:36.381 "num_copy_ops": 0, 00:14:36.381 "read_latency_ticks": 1126440109564, 00:14:36.381 "max_read_latency_ticks": 15552762, 00:14:36.381 "min_read_latency_ticks": 6843562, 00:14:36.381 "write_latency_ticks": 0, 00:14:36.381 "max_write_latency_ticks": 0, 00:14:36.381 "min_write_latency_ticks": 0, 00:14:36.381 "unmap_latency_ticks": 0, 00:14:36.381 "max_unmap_latency_ticks": 0, 00:14:36.381 "min_unmap_latency_ticks": 0, 00:14:36.381 "copy_latency_ticks": 0, 00:14:36.381 "max_copy_latency_ticks": 0, 00:14:36.381 "min_copy_latency_ticks": 0 00:14:36.381 } 00:14:36.381 ] 00:14:36.381 }' 00:14:36.381 16:30:12 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:14:36.381 16:30:13 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=125952 00:14:36.381 16:30:13 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=125952 00:14:36.381 16:30:13 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:14:36.381 16:30:13 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=126720 00:14:36.381 16:30:13 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=252672 00:14:36.381 16:30:13 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:36.381 16:30:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.381 16:30:13 -- common/autotest_common.sh@10 -- # set +x 00:14:36.381 16:30:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.381 16:30:13 -- bdev/blockdev.sh@575 -- # iostats='{ 00:14:36.381 "tick_rate": 2200000000, 00:14:36.381 "ticks": 1756288318354, 00:14:36.381 "bdevs": [ 00:14:36.381 { 00:14:36.381 "name": "Malloc_STAT", 00:14:36.381 "bytes_read": 1098945024, 00:14:36.381 "num_read_ops": 268291, 00:14:36.381 "bytes_written": 0, 00:14:36.381 "num_write_ops": 0, 00:14:36.381 "bytes_unmapped": 0, 00:14:36.381 "num_unmap_ops": 0, 00:14:36.381 "bytes_copied": 0, 00:14:36.381 "num_copy_ops": 0, 00:14:36.381 "read_latency_ticks": 2395127680971, 00:14:36.381 "max_read_latency_ticks": 19717812, 00:14:36.381 "min_read_latency_ticks": 314954, 00:14:36.381 "write_latency_ticks": 0, 00:14:36.381 "max_write_latency_ticks": 0, 00:14:36.381 "min_write_latency_ticks": 0, 00:14:36.381 "unmap_latency_ticks": 0, 00:14:36.381 "max_unmap_latency_ticks": 0, 00:14:36.381 "min_unmap_latency_ticks": 0, 00:14:36.381 "copy_latency_ticks": 0, 00:14:36.381 "max_copy_latency_ticks": 0, 00:14:36.381 "min_copy_latency_ticks": 0, 00:14:36.381 "io_error": {} 00:14:36.381 } 00:14:36.381 ] 00:14:36.381 }' 00:14:36.381 16:30:13 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:14:36.381 16:30:13 -- bdev/blockdev.sh@576 -- # io_count2=268291 00:14:36.381 16:30:13 -- bdev/blockdev.sh@581 -- # '[' 252672 -lt 244227 ']' 00:14:36.381 
16:30:13 -- bdev/blockdev.sh@581 -- # '[' 252672 -gt 268291 ']'
00:14:36.381 16:30:13 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT
00:14:36.381 16:30:13 -- common/autotest_common.sh@551 -- # xtrace_disable
00:14:36.381 16:30:13 -- common/autotest_common.sh@10 -- # set +x
00:14:36.381
00:14:36.381 Latency(us)
00:14:36.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:36.381 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:14:36.381 Malloc_STAT : 2.20 62665.14 244.79 0.00 0.00 4076.33 1109.64 8996.31
00:14:36.381 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:14:36.381 Malloc_STAT : 2.20 63218.92 246.95 0.00 0.00 4040.90 651.64 7089.80
00:14:36.381 ===================================================================================================================
00:14:36.381 Total : 125884.05 491.73 0.00 0.00 4058.53 651.64 8996.31
00:14:36.640 0
00:14:36.640 16:30:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:36.640 16:30:13 -- bdev/blockdev.sh@607 -- # killprocess 114188
00:14:36.640 16:30:13 -- common/autotest_common.sh@926 -- # '[' -z 114188 ']'
00:14:36.640 16:30:13 -- common/autotest_common.sh@930 -- # kill -0 114188
00:14:36.640 16:30:13 -- common/autotest_common.sh@931 -- # uname
00:14:36.640 16:30:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:14:36.640 16:30:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114188
00:14:36.640 16:30:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:14:36.640 killing process with pid 114188
Received shutdown signal, test time was about 2.318160 seconds
00:14:36.640
00:14:36.640 Latency(us)
00:14:36.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:36.640 ===================================================================================================================
00:14:36.640 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:14:36.640 16:30:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:14:36.640 16:30:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114188'
00:14:36.640 16:30:13 -- common/autotest_common.sh@945 -- # kill 114188
00:14:36.640 16:30:13 -- common/autotest_common.sh@950 -- # wait 114188
00:14:37.573 ************************************
00:14:37.573 END TEST bdev_stat
00:14:37.573 ************************************
00:14:37.573 16:30:14 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT
00:14:37.573
00:14:37.573 real 0m4.545s
00:14:37.573 user 0m8.551s
00:14:37.573 sys 0m0.380s
00:14:37.573 16:30:14 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:37.573 16:30:14 -- common/autotest_common.sh@10 -- # set +x
00:14:37.832 16:30:14 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]]
00:14:37.832 16:30:14 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]]
00:14:37.832 16:30:14 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:14:37.832 16:30:14 -- bdev/blockdev.sh@809 -- # cleanup
00:14:37.832 16:30:14 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:14:37.832 16:30:14 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:14:37.832 16:30:14 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]]
00:14:37.832 16:30:14 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]]
00:14:37.832 16:30:14 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]]
00:14:37.832 16:30:14 -- bdev/blockdev.sh@38 -- # [[ bdev
== xnvme ]] 00:14:37.832 00:14:37.832 real 2m19.405s 00:14:37.832 user 5m46.982s 00:14:37.832 sys 0m20.976s 00:14:37.832 ************************************ 00:14:37.832 END TEST blockdev_general 00:14:37.832 ************************************ 00:14:37.832 16:30:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:37.832 16:30:14 -- common/autotest_common.sh@10 -- # set +x 00:14:37.832 16:30:14 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:37.832 16:30:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:37.832 16:30:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:37.832 16:30:14 -- common/autotest_common.sh@10 -- # set +x 00:14:37.832 ************************************ 00:14:37.832 START TEST bdev_raid 00:14:37.832 ************************************ 00:14:37.832 16:30:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:37.832 * Looking for test storage... 00:14:37.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:37.832 16:30:14 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:37.832 16:30:14 -- bdev/nbd_common.sh@6 -- # set -e 00:14:37.832 16:30:14 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:37.832 16:30:14 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:14:37.832 16:30:14 -- bdev/bdev_raid.sh@716 -- # uname -s 00:14:37.832 16:30:14 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:14:37.832 16:30:14 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:14:37.832 16:30:14 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:14:37.832 16:30:14 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:14:37.832 16:30:14 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:37.832 16:30:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:37.832 16:30:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:37.832 16:30:14 -- common/autotest_common.sh@10 -- # set +x 00:14:37.832 ************************************ 00:14:37.832 START TEST raid_function_test_raid0 00:14:37.832 ************************************ 00:14:37.832 16:30:14 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:14:37.832 16:30:14 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:14:37.832 16:30:14 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:37.833 16:30:14 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:37.833 16:30:14 -- bdev/bdev_raid.sh@86 -- # raid_pid=114353 00:14:37.833 16:30:14 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:37.833 Process raid pid: 114353 00:14:37.833 16:30:14 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 114353' 00:14:37.833 16:30:14 -- bdev/bdev_raid.sh@88 -- # waitforlisten 114353 /var/tmp/spdk-raid.sock 00:14:37.833 16:30:14 -- common/autotest_common.sh@819 -- # '[' -z 114353 ']' 00:14:37.833 16:30:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:37.833 16:30:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:37.833 16:30:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:37.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
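The raid function test starting up here drives all of its configuration through a generated RPC file: configure_raid_bdev (the rm -rf / cat / rpc.py trio visible just below) writes the bdev-creation commands into test/bdev/rpcs.txt and replays them against /var/tmp/spdk-raid.sock. The file's contents are never echoed to the log, but from the two claimed bases and the blockcnt 131072, blocklen 512 geometry reported for the raid, it is presumably equivalent to:

    # Presumed rpcs.txt for the raid0 pass (reconstructed from the log, not quoted
    # from it; the -z strip size in particular is an assumption):
    bdev_malloc_create 32 512 -b Base_1
    bdev_malloc_create 32 512 -b Base_2
    bdev_raid_create -n raid -z 64 -r raid0 -b "Base_1 Base_2"

Two 32 MiB bases at 512-byte blocks are 65536 blocks each, which is exactly where the 131072-block raid reported below comes from.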
00:14:37.833 16:30:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:37.833 16:30:14 -- common/autotest_common.sh@10 -- # set +x 00:14:37.833 [2024-07-11 16:30:14.636905] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:37.833 [2024-07-11 16:30:14.637306] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.091 [2024-07-11 16:30:14.794815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.351 [2024-07-11 16:30:14.962932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.351 [2024-07-11 16:30:15.127665] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.917 16:30:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:38.917 16:30:15 -- common/autotest_common.sh@852 -- # return 0 00:14:38.917 16:30:15 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:14:38.917 16:30:15 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:14:38.917 16:30:15 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:38.917 16:30:15 -- bdev/bdev_raid.sh@70 -- # cat 00:14:38.917 16:30:15 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:39.175 [2024-07-11 16:30:15.910785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:39.175 [2024-07-11 16:30:15.912663] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:39.175 [2024-07-11 16:30:15.912849] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:39.175 [2024-07-11 16:30:15.913003] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:39.175 [2024-07-11 16:30:15.913205] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:39.175 [2024-07-11 16:30:15.913567] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:39.175 [2024-07-11 16:30:15.913711] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:14:39.175 [2024-07-11 16:30:15.913944] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.175 Base_1 00:14:39.175 Base_2 00:14:39.175 16:30:15 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:39.175 16:30:15 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:39.175 16:30:15 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:39.434 16:30:16 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:39.434 16:30:16 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:39.434 16:30:16 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:39.434 16:30:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:39.434 16:30:16 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:14:39.434 16:30:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:39.434 16:30:16 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:14:39.434 16:30:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:39.434 16:30:16 -- bdev/nbd_common.sh@12 -- # local i 00:14:39.434 16:30:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:14:39.434 16:30:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:39.434 16:30:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:39.692 [2024-07-11 16:30:16.398854] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:39.692 /dev/nbd0 00:14:39.692 16:30:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:39.692 16:30:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:39.692 16:30:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:39.692 16:30:16 -- common/autotest_common.sh@857 -- # local i 00:14:39.692 16:30:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:39.692 16:30:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:39.692 16:30:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:39.692 16:30:16 -- common/autotest_common.sh@861 -- # break 00:14:39.692 16:30:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:39.692 16:30:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:39.692 16:30:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.692 1+0 records in 00:14:39.692 1+0 records out 00:14:39.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535262 s, 7.7 MB/s 00:14:39.693 16:30:16 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.693 16:30:16 -- common/autotest_common.sh@874 -- # size=4096 00:14:39.693 16:30:16 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.693 16:30:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:39.693 16:30:16 -- common/autotest_common.sh@877 -- # return 0 00:14:39.693 16:30:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.693 16:30:16 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:39.693 16:30:16 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:39.693 16:30:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:39.693 16:30:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:39.951 16:30:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:39.951 { 00:14:39.951 "nbd_device": "/dev/nbd0", 00:14:39.951 "bdev_name": "raid" 00:14:39.951 } 00:14:39.951 ]' 00:14:39.951 16:30:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:39.951 { 00:14:39.951 "nbd_device": "/dev/nbd0", 00:14:39.951 "bdev_name": "raid" 00:14:39.951 } 00:14:39.951 ]' 00:14:39.951 16:30:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:39.951 16:30:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:39.951 16:30:16 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:39.951 16:30:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:39.951 16:30:16 -- bdev/nbd_common.sh@65 -- # count=1 00:14:39.951 16:30:16 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:39.951 16:30:16 -- 
bdev/bdev_raid.sh@20 -- # local blksize 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:39.951 4096+0 records in 00:14:39.951 4096+0 records out 00:14:39.951 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.025658 s, 81.7 MB/s 00:14:39.951 16:30:16 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:40.210 4096+0 records in 00:14:40.210 4096+0 records out 00:14:40.210 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.258857 s, 8.1 MB/s 00:14:40.210 16:30:16 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:40.210 16:30:16 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:40.210 16:30:16 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:40.210 16:30:16 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:40.210 16:30:16 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:40.210 16:30:16 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:40.210 16:30:16 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:40.210 128+0 records in 00:14:40.210 128+0 records out 00:14:40.210 65536 bytes (66 kB, 64 KiB) copied, 0.000846892 s, 77.4 MB/s 00:14:40.210 16:30:16 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:40.210 16:30:16 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:40.210 16:30:16 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:40.210 16:30:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:40.210 16:30:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:40.210 16:30:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:40.210 16:30:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:40.210 16:30:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:40.469 2035+0 records in 00:14:40.469 2035+0 records out 00:14:40.469 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00886549 s, 118 MB/s 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:40.469 456+0 records in 
00:14:40.469 456+0 records out 00:14:40.469 233472 bytes (233 kB, 228 KiB) copied, 0.00246002 s, 94.9 MB/s 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:40.469 16:30:17 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:40.469 16:30:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:40.469 16:30:17 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:40.469 16:30:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:40.469 16:30:17 -- bdev/nbd_common.sh@51 -- # local i 00:14:40.469 16:30:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:40.469 16:30:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:40.728 16:30:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:40.728 16:30:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:40.728 16:30:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:40.728 16:30:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.728 16:30:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.728 16:30:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:40.728 [2024-07-11 16:30:17.346393] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.728 16:30:17 -- bdev/nbd_common.sh@41 -- # break 00:14:40.728 16:30:17 -- bdev/nbd_common.sh@45 -- # return 0 00:14:40.728 16:30:17 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:40.728 16:30:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:40.728 16:30:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:40.987 16:30:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:40.987 16:30:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:40.987 16:30:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:40.987 16:30:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:40.987 16:30:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:40.987 16:30:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:40.987 16:30:17 -- bdev/nbd_common.sh@65 -- # true 00:14:40.987 16:30:17 -- bdev/nbd_common.sh@65 -- # count=0 00:14:40.987 16:30:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:40.987 16:30:17 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:40.987 16:30:17 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:40.987 16:30:17 -- bdev/bdev_raid.sh@111 -- # killprocess 114353 00:14:40.987 16:30:17 -- common/autotest_common.sh@926 -- # '[' -z 114353 ']' 00:14:40.987 16:30:17 -- common/autotest_common.sh@930 -- # kill -0 114353 00:14:40.987 16:30:17 -- common/autotest_common.sh@931 -- # uname 00:14:40.987 16:30:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:40.987 16:30:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114353 00:14:40.987 killing process with pid 114353 00:14:40.987 16:30:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:40.987 16:30:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:40.987 
16:30:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114353' 00:14:40.987 16:30:17 -- common/autotest_common.sh@945 -- # kill 114353 00:14:40.987 16:30:17 -- common/autotest_common.sh@950 -- # wait 114353 00:14:40.987 [2024-07-11 16:30:17.669442] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:40.987 [2024-07-11 16:30:17.669529] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.987 [2024-07-11 16:30:17.669609] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.987 [2024-07-11 16:30:17.669622] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:14:41.246 [2024-07-11 16:30:17.795940] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:42.182 ************************************ 00:14:42.182 END TEST raid_function_test_raid0 00:14:42.182 ************************************ 00:14:42.182 16:30:18 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:42.182 00:14:42.183 real 0m4.131s 00:14:42.183 user 0m5.464s 00:14:42.183 sys 0m0.789s 00:14:42.183 16:30:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.183 16:30:18 -- common/autotest_common.sh@10 -- # set +x 00:14:42.183 16:30:18 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:14:42.183 16:30:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:42.183 16:30:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:42.183 16:30:18 -- common/autotest_common.sh@10 -- # set +x 00:14:42.183 ************************************ 00:14:42.183 START TEST raid_function_test_concat 00:14:42.183 ************************************ 00:14:42.183 Process raid pid: 114510 00:14:42.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:42.183 16:30:18 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:14:42.183 16:30:18 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:14:42.183 16:30:18 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:14:42.183 16:30:18 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:14:42.183 16:30:18 -- bdev/bdev_raid.sh@86 -- # raid_pid=114510 00:14:42.183 16:30:18 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 114510' 00:14:42.183 16:30:18 -- bdev/bdev_raid.sh@88 -- # waitforlisten 114510 /var/tmp/spdk-raid.sock 00:14:42.183 16:30:18 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:42.183 16:30:18 -- common/autotest_common.sh@819 -- # '[' -z 114510 ']' 00:14:42.183 16:30:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:42.183 16:30:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:42.183 16:30:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:42.183 16:30:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:42.183 16:30:18 -- common/autotest_common.sh@10 -- # set +x 00:14:42.183 [2024-07-11 16:30:18.812734] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
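Before the concat pass repeats it verbatim, it is worth unpacking what raid_unmap_data_verify just did for raid0: it filled a 2 MiB file from /dev/urandom, wrote it through /dev/nbd0 with oflag=direct, and byte-compared the device against the file; then, for three (block offset, block count) pairs, it zeroed that window of the reference file, issued blkdiscard over the matching byte range of the device, and compared the full 2 MiB again, so a discarded range that does not read back as zeros shows up as a cmp mismatch. The byte offsets are just the 512-byte block numbers scaled up, e.g. blocks 1028 through 1028+2035 become -o 526336 -l 1041920. In outline (a sketch of the harness loop, not its literal text):

    for i in 0 1 2; do
        off=$(( ${unmap_blk_offs[i]} * blksize ))      # e.g. 1028 * 512 = 526336
        len=$(( ${unmap_blk_nums[i]} * blksize ))      # e.g. 2035 * 512 = 1041920
        dd if=/dev/zero of=/raidrandtest bs=$blksize \
           seek=${unmap_blk_offs[i]} count=${unmap_blk_nums[i]} conv=notrunc
        blkdiscard -o $off -l $len /dev/nbd0           # discard the same range on the device
        blockdev --flushbufs /dev/nbd0
        cmp -b -n 2097152 /raidrandtest /dev/nbd0      # device must still match the file
    done

The concat run that follows exercises exactly the same loop; only the raid level handed to bdev_raid_create changes.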
00:14:42.183 [2024-07-11 16:30:18.813147] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:42.183 [2024-07-11 16:30:18.977936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.441 [2024-07-11 16:30:19.136882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.700 [2024-07-11 16:30:19.309243] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:42.959 16:30:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:42.959 16:30:19 -- common/autotest_common.sh@852 -- # return 0 00:14:42.959 16:30:19 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:14:42.959 16:30:19 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:14:42.959 16:30:19 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:42.959 16:30:19 -- bdev/bdev_raid.sh@70 -- # cat 00:14:42.959 16:30:19 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:43.527 [2024-07-11 16:30:20.063268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:43.527 [2024-07-11 16:30:20.065297] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:43.527 [2024-07-11 16:30:20.065518] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:43.527 [2024-07-11 16:30:20.065625] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:43.527 [2024-07-11 16:30:20.065798] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:43.527 [2024-07-11 16:30:20.066168] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:43.527 [2024-07-11 16:30:20.066278] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:14:43.527 [2024-07-11 16:30:20.066517] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.527 Base_1 00:14:43.527 Base_2 00:14:43.527 16:30:20 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:43.527 16:30:20 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:14:43.527 16:30:20 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:43.527 16:30:20 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:14:43.527 16:30:20 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:14:43.527 16:30:20 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:43.527 16:30:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:43.527 16:30:20 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:14:43.527 16:30:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:43.527 16:30:20 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:14:43.527 16:30:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:43.527 16:30:20 -- bdev/nbd_common.sh@12 -- # local i 00:14:43.527 16:30:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:43.527 16:30:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:43.527 16:30:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:43.786 [2024-07-11 16:30:20.579337] 
bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:44.045 /dev/nbd0 00:14:44.045 16:30:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:44.045 16:30:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:44.045 16:30:20 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:44.045 16:30:20 -- common/autotest_common.sh@857 -- # local i 00:14:44.045 16:30:20 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:44.045 16:30:20 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:44.045 16:30:20 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:44.045 16:30:20 -- common/autotest_common.sh@861 -- # break 00:14:44.045 16:30:20 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:44.045 16:30:20 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:44.045 16:30:20 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.045 1+0 records in 00:14:44.045 1+0 records out 00:14:44.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542935 s, 7.5 MB/s 00:14:44.045 16:30:20 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.045 16:30:20 -- common/autotest_common.sh@874 -- # size=4096 00:14:44.045 16:30:20 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.045 16:30:20 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:44.045 16:30:20 -- common/autotest_common.sh@877 -- # return 0 00:14:44.045 16:30:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:44.045 16:30:20 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.045 16:30:20 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:44.045 16:30:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:44.045 16:30:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:44.045 16:30:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:44.045 { 00:14:44.045 "nbd_device": "/dev/nbd0", 00:14:44.045 "bdev_name": "raid" 00:14:44.045 } 00:14:44.045 ]' 00:14:44.045 16:30:20 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:44.045 { 00:14:44.045 "nbd_device": "/dev/nbd0", 00:14:44.045 "bdev_name": "raid" 00:14:44.045 } 00:14:44.045 ]' 00:14:44.045 16:30:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:44.304 16:30:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:44.304 16:30:20 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:44.304 16:30:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:44.304 16:30:20 -- bdev/nbd_common.sh@65 -- # count=1 00:14:44.304 16:30:20 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@98 -- # count=1 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@20 -- # local blksize 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 
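The three @21 entries above are one pipeline: lsblk prints the LOG-SEC column for /dev/nbd0, grep -v drops the header, and cut extracts the value, which lands in blksize on the next line. The same number can be read more directly (a sketch, not what bdev_raid.sh actually runs):

    blksize=$(lsblk --noheadings -o LOG-SEC /dev/nbd0 | awk 'NR==1 {print $1}')
    # or simply:
    blksize=$(blockdev --getss /dev/nbd0)   # logical sector size, 512 for this raid bdev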
00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:14:44.304 4096+0 records in 00:14:44.304 4096+0 records out 00:14:44.304 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0268879 s, 78.0 MB/s 00:14:44.304 16:30:20 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:44.562 4096+0 records in 00:14:44.562 4096+0 records out 00:14:44.562 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.180094 s, 11.6 MB/s 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:44.563 128+0 records in 00:14:44.563 128+0 records out 00:14:44.563 65536 bytes (66 kB, 64 KiB) copied, 0.000896664 s, 73.1 MB/s 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:44.563 2035+0 records in 00:14:44.563 2035+0 records out 00:14:44.563 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00735477 s, 142 MB/s 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:44.563 456+0 records in 00:14:44.563 456+0 records out 00:14:44.563 233472 bytes (233 kB, 228 KiB) copied, 0.0023432 s, 99.6 MB/s 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@46 -- # 
blockdev --flushbufs /dev/nbd0 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@53 -- # return 0 00:14:44.563 16:30:21 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:44.563 16:30:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:44.563 16:30:21 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:44.563 16:30:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:44.563 16:30:21 -- bdev/nbd_common.sh@51 -- # local i 00:14:44.563 16:30:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.563 16:30:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:44.563 16:30:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:44.563 16:30:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:44.563 16:30:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:44.563 16:30:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.563 16:30:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.563 16:30:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:44.822 16:30:21 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:14:44.822 [2024-07-11 16:30:21.374382] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.822 16:30:21 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:14:44.822 16:30:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.822 16:30:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:44.822 16:30:21 -- bdev/nbd_common.sh@41 -- # break 00:14:44.822 16:30:21 -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.822 16:30:21 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:44.822 16:30:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:44.822 16:30:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:45.080 16:30:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:45.080 16:30:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:45.080 16:30:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:45.080 16:30:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:45.080 16:30:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:45.080 16:30:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:45.080 16:30:21 -- bdev/nbd_common.sh@65 -- # true 00:14:45.080 16:30:21 -- bdev/nbd_common.sh@65 -- # count=0 00:14:45.080 16:30:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:45.080 16:30:21 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:45.080 16:30:21 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:45.080 16:30:21 -- bdev/bdev_raid.sh@111 -- # killprocess 114510 00:14:45.080 16:30:21 -- common/autotest_common.sh@926 -- # '[' -z 114510 ']' 00:14:45.080 16:30:21 -- common/autotest_common.sh@930 -- # kill -0 114510 00:14:45.080 16:30:21 -- common/autotest_common.sh@931 -- # uname 00:14:45.080 16:30:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:45.080 16:30:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114510 00:14:45.080 killing process with pid 114510 00:14:45.080 16:30:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:45.081 16:30:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 
= sudo ']' 00:14:45.081 16:30:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114510' 00:14:45.081 16:30:21 -- common/autotest_common.sh@945 -- # kill 114510 00:14:45.081 16:30:21 -- common/autotest_common.sh@950 -- # wait 114510 00:14:45.081 [2024-07-11 16:30:21.744496] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:45.081 [2024-07-11 16:30:21.744587] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.081 [2024-07-11 16:30:21.744683] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:45.081 [2024-07-11 16:30:21.744817] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:14:45.081 [2024-07-11 16:30:21.873106] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.016 ************************************ 00:14:46.016 END TEST raid_function_test_concat 00:14:46.016 ************************************ 00:14:46.016 16:30:22 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:46.016 00:14:46.016 real 0m4.037s 00:14:46.016 user 0m5.327s 00:14:46.016 sys 0m0.692s 00:14:46.016 16:30:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.016 16:30:22 -- common/autotest_common.sh@10 -- # set +x 00:14:46.274 16:30:22 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:14:46.274 16:30:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:46.274 16:30:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:46.274 16:30:22 -- common/autotest_common.sh@10 -- # set +x 00:14:46.274 ************************************ 00:14:46.274 START TEST raid0_resize_test 00:14:46.274 ************************************ 00:14:46.274 16:30:22 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:14:46.274 16:30:22 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:14:46.274 16:30:22 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:14:46.274 16:30:22 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:14:46.274 16:30:22 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:14:46.274 16:30:22 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:14:46.274 16:30:22 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:14:46.274 16:30:22 -- bdev/bdev_raid.sh@301 -- # raid_pid=114680 00:14:46.274 16:30:22 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 114680' 00:14:46.274 16:30:22 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:46.274 Process raid pid: 114680 00:14:46.275 16:30:22 -- bdev/bdev_raid.sh@303 -- # waitforlisten 114680 /var/tmp/spdk-raid.sock 00:14:46.275 16:30:22 -- common/autotest_common.sh@819 -- # '[' -z 114680 ']' 00:14:46.275 16:30:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:46.275 16:30:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:46.275 16:30:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:46.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:46.275 16:30:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:46.275 16:30:22 -- common/autotest_common.sh@10 -- # set +x 00:14:46.275 [2024-07-11 16:30:22.910051] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
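The resize test spinning up here is pure size arithmetic on a two-disk raid0: judging by the numbers it checks, a raid0 over N bases exposes N * min(base_num_blocks) blocks (strip-size rounding ignored; the sizes in this run are exact multiples). Two 32 MiB null bdevs at 512-byte blocks are 65536 blocks each, so the raid starts at 131072 blocks (64 MiB); growing Base_1 alone changes nothing because the minimum is still Base_2; only once Base_2 is also resized does the raid report 262144 blocks (128 MiB). As a worked sketch of that rule (illustrative bash, not raid0.c):

    blk=512
    b1=$(( 32 * 1024 * 1024 / blk )); b2=$b1   # 65536 blocks per base
    echo $(( 2 * (b1 < b2 ? b1 : b2) ))        # 131072 -> 64 MiB raid
    b1=$(( 64 * 1024 * 1024 / blk ))           # bdev_null_resize Base_1 64
    echo $(( 2 * (b1 < b2 ? b1 : b2) ))        # still 131072: min is Base_2
    b2=$b1                                     # bdev_null_resize Base_2 64
    echo $(( 2 * (b1 < b2 ? b1 : b2) ))        # 262144 -> 128 MiB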
00:14:46.275 [2024-07-11 16:30:22.910480] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.275 [2024-07-11 16:30:23.078180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.532 [2024-07-11 16:30:23.237970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.790 [2024-07-11 16:30:23.402480] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.048 16:30:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:47.048 16:30:23 -- common/autotest_common.sh@852 -- # return 0 00:14:47.048 16:30:23 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:47.306 Base_1 00:14:47.306 16:30:24 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:47.564 Base_2 00:14:47.564 16:30:24 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:47.823 [2024-07-11 16:30:24.424614] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:47.823 [2024-07-11 16:30:24.426257] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:47.823 [2024-07-11 16:30:24.426431] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:47.823 [2024-07-11 16:30:24.426528] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:47.823 [2024-07-11 16:30:24.426686] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005380 00:14:47.823 [2024-07-11 16:30:24.427028] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:47.823 [2024-07-11 16:30:24.427163] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007280 00:14:47.823 [2024-07-11 16:30:24.427403] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.823 16:30:24 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:47.823 [2024-07-11 16:30:24.604643] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:47.823 [2024-07-11 16:30:24.604770] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:47.823 true 00:14:47.823 16:30:24 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:47.823 16:30:24 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:14:48.081 [2024-07-11 16:30:24.820769] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.081 16:30:24 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:14:48.081 16:30:24 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:14:48.081 16:30:24 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:14:48.081 16:30:24 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:48.339 [2024-07-11 16:30:25.012696] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
00:14:48.339 [2024-07-11 16:30:25.012818] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:48.339 [2024-07-11 16:30:25.012974] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:14:48.339 [2024-07-11 16:30:25.013159] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:48.339 true 00:14:48.339 16:30:25 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:48.339 16:30:25 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:14:48.597 [2024-07-11 16:30:25.240852] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.597 16:30:25 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:14:48.597 16:30:25 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:14:48.597 16:30:25 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:14:48.597 16:30:25 -- bdev/bdev_raid.sh@332 -- # killprocess 114680 00:14:48.597 16:30:25 -- common/autotest_common.sh@926 -- # '[' -z 114680 ']' 00:14:48.597 16:30:25 -- common/autotest_common.sh@930 -- # kill -0 114680 00:14:48.597 16:30:25 -- common/autotest_common.sh@931 -- # uname 00:14:48.597 16:30:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:48.597 16:30:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114680 00:14:48.597 killing process with pid 114680 00:14:48.597 16:30:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:48.597 16:30:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:48.597 16:30:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114680' 00:14:48.597 16:30:25 -- common/autotest_common.sh@945 -- # kill 114680 00:14:48.598 16:30:25 -- common/autotest_common.sh@950 -- # wait 114680 00:14:48.598 [2024-07-11 16:30:25.273291] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.598 [2024-07-11 16:30:25.273444] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.598 [2024-07-11 16:30:25.273533] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.598 [2024-07-11 16:30:25.273543] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Raid, state offline 00:14:48.598 [2024-07-11 16:30:25.274083] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.530 ************************************ 00:14:49.530 END TEST raid0_resize_test 00:14:49.530 ************************************ 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@334 -- # return 0 00:14:49.530 00:14:49.530 real 0m3.339s 00:14:49.530 user 0m4.820s 00:14:49.530 sys 0m0.394s 00:14:49.530 16:30:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.530 16:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:49.530 16:30:26 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:49.530 16:30:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:49.530 16:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:49.530 ************************************ 00:14:49.530 START TEST 
raid_state_function_test 00:14:49.530 ************************************ 00:14:49.530 16:30:26 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@226 -- # raid_pid=114769 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:49.530 Process raid pid: 114769 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 114769' 00:14:49.530 16:30:26 -- bdev/bdev_raid.sh@228 -- # waitforlisten 114769 /var/tmp/spdk-raid.sock 00:14:49.530 16:30:26 -- common/autotest_common.sh@819 -- # '[' -z 114769 ']' 00:14:49.530 16:30:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:49.530 16:30:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:49.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:49.530 16:30:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:49.530 16:30:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:49.530 16:30:26 -- common/autotest_common.sh@10 -- # set +x 00:14:49.530 [2024-07-11 16:30:26.301029] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
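Each state-function test starts a private bdev_svc app and drives it entirely over JSON-RPC on a dedicated UNIX socket. A minimal sketch of that startup, assuming the SPDK build tree from the log; the readiness loop below is illustrative, while the harness's own waitforlisten helper serves the same purpose:

    # bare bdev service with RAID debug logging, instance id 0
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # block until the RPC socket answers before issuing test commands
    until rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done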
00:14:49.530 [2024-07-11 16:30:26.301431] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.787 [2024-07-11 16:30:26.466260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.044 [2024-07-11 16:30:26.625931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.044 [2024-07-11 16:30:26.790849] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.618 16:30:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:50.618 16:30:27 -- common/autotest_common.sh@852 -- # return 0 00:14:50.618 16:30:27 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:50.884 [2024-07-11 16:30:27.432671] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.884 [2024-07-11 16:30:27.432879] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.884 [2024-07-11 16:30:27.433031] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.884 [2024-07-11 16:30:27.433114] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:50.884 "name": "Existed_Raid", 00:14:50.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.884 "strip_size_kb": 64, 00:14:50.884 "state": "configuring", 00:14:50.884 "raid_level": "raid0", 00:14:50.884 "superblock": false, 00:14:50.884 "num_base_bdevs": 2, 00:14:50.884 "num_base_bdevs_discovered": 0, 00:14:50.884 "num_base_bdevs_operational": 2, 00:14:50.884 "base_bdevs_list": [ 00:14:50.884 { 00:14:50.884 "name": "BaseBdev1", 00:14:50.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.884 "is_configured": false, 00:14:50.884 "data_offset": 0, 00:14:50.884 "data_size": 0 00:14:50.884 }, 00:14:50.884 { 00:14:50.884 "name": "BaseBdev2", 00:14:50.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.884 "is_configured": false, 00:14:50.884 "data_offset": 0, 00:14:50.884 "data_size": 0 00:14:50.884 } 00:14:50.884 ] 00:14:50.884 }' 00:14:50.884 16:30:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:50.884 16:30:27 -- 
common/autotest_common.sh@10 -- # set +x 00:14:51.818 16:30:28 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:51.818 [2024-07-11 16:30:28.520771] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.818 [2024-07-11 16:30:28.520969] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:51.818 16:30:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:52.076 [2024-07-11 16:30:28.712820] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.076 [2024-07-11 16:30:28.713035] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.076 [2024-07-11 16:30:28.713149] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.076 [2024-07-11 16:30:28.713206] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.076 16:30:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:52.334 [2024-07-11 16:30:28.969830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.334 BaseBdev1 00:14:52.334 16:30:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:52.334 16:30:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:14:52.334 16:30:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:52.334 16:30:28 -- common/autotest_common.sh@889 -- # local i 00:14:52.334 16:30:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:52.334 16:30:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:52.334 16:30:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:52.593 16:30:29 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:52.593 [ 00:14:52.593 { 00:14:52.593 "name": "BaseBdev1", 00:14:52.593 "aliases": [ 00:14:52.593 "8445fa7b-a1eb-46e2-aa94-48a8dead9cf5" 00:14:52.593 ], 00:14:52.593 "product_name": "Malloc disk", 00:14:52.593 "block_size": 512, 00:14:52.593 "num_blocks": 65536, 00:14:52.593 "uuid": "8445fa7b-a1eb-46e2-aa94-48a8dead9cf5", 00:14:52.593 "assigned_rate_limits": { 00:14:52.593 "rw_ios_per_sec": 0, 00:14:52.593 "rw_mbytes_per_sec": 0, 00:14:52.593 "r_mbytes_per_sec": 0, 00:14:52.593 "w_mbytes_per_sec": 0 00:14:52.593 }, 00:14:52.593 "claimed": true, 00:14:52.593 "claim_type": "exclusive_write", 00:14:52.593 "zoned": false, 00:14:52.593 "supported_io_types": { 00:14:52.593 "read": true, 00:14:52.593 "write": true, 00:14:52.593 "unmap": true, 00:14:52.593 "write_zeroes": true, 00:14:52.593 "flush": true, 00:14:52.593 "reset": true, 00:14:52.593 "compare": false, 00:14:52.593 "compare_and_write": false, 00:14:52.593 "abort": true, 00:14:52.593 "nvme_admin": false, 00:14:52.593 "nvme_io": false 00:14:52.593 }, 00:14:52.593 "memory_domains": [ 00:14:52.593 { 00:14:52.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.593 "dma_device_type": 2 00:14:52.593 } 00:14:52.593 ], 00:14:52.593 "driver_specific": {} 00:14:52.593 } 00:14:52.593 ] 00:14:52.593 16:30:29 
-- common/autotest_common.sh@895 -- # return 0 00:14:52.593 16:30:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:52.593 16:30:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:52.593 16:30:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:52.593 16:30:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:52.593 16:30:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:52.593 16:30:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:52.593 16:30:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:52.593 16:30:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:52.593 16:30:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:52.593 16:30:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:52.593 16:30:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.593 16:30:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.851 16:30:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:52.851 "name": "Existed_Raid", 00:14:52.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.851 "strip_size_kb": 64, 00:14:52.851 "state": "configuring", 00:14:52.851 "raid_level": "raid0", 00:14:52.851 "superblock": false, 00:14:52.851 "num_base_bdevs": 2, 00:14:52.851 "num_base_bdevs_discovered": 1, 00:14:52.851 "num_base_bdevs_operational": 2, 00:14:52.851 "base_bdevs_list": [ 00:14:52.851 { 00:14:52.851 "name": "BaseBdev1", 00:14:52.851 "uuid": "8445fa7b-a1eb-46e2-aa94-48a8dead9cf5", 00:14:52.851 "is_configured": true, 00:14:52.851 "data_offset": 0, 00:14:52.851 "data_size": 65536 00:14:52.851 }, 00:14:52.851 { 00:14:52.851 "name": "BaseBdev2", 00:14:52.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.852 "is_configured": false, 00:14:52.852 "data_offset": 0, 00:14:52.852 "data_size": 0 00:14:52.852 } 00:14:52.852 ] 00:14:52.852 }' 00:14:52.852 16:30:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:52.852 16:30:29 -- common/autotest_common.sh@10 -- # set +x 00:14:53.418 16:30:30 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:53.677 [2024-07-11 16:30:30.342130] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:53.677 [2024-07-11 16:30:30.342332] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:53.677 16:30:30 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:53.677 16:30:30 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:53.936 [2024-07-11 16:30:30.538174] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:53.936 [2024-07-11 16:30:30.539878] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:53.936 [2024-07-11 16:30:30.540065] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.936 16:30:30 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:53.936 16:30:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:53.936 16:30:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:53.936 16:30:30 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:53.936 16:30:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:53.936 16:30:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:53.936 16:30:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:53.936 16:30:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:53.936 16:30:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:53.936 16:30:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:53.936 16:30:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:53.936 16:30:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:53.936 16:30:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.936 16:30:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.195 16:30:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:54.195 "name": "Existed_Raid", 00:14:54.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.195 "strip_size_kb": 64, 00:14:54.195 "state": "configuring", 00:14:54.195 "raid_level": "raid0", 00:14:54.195 "superblock": false, 00:14:54.195 "num_base_bdevs": 2, 00:14:54.195 "num_base_bdevs_discovered": 1, 00:14:54.195 "num_base_bdevs_operational": 2, 00:14:54.195 "base_bdevs_list": [ 00:14:54.195 { 00:14:54.195 "name": "BaseBdev1", 00:14:54.195 "uuid": "8445fa7b-a1eb-46e2-aa94-48a8dead9cf5", 00:14:54.195 "is_configured": true, 00:14:54.195 "data_offset": 0, 00:14:54.195 "data_size": 65536 00:14:54.195 }, 00:14:54.195 { 00:14:54.195 "name": "BaseBdev2", 00:14:54.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.195 "is_configured": false, 00:14:54.195 "data_offset": 0, 00:14:54.195 "data_size": 0 00:14:54.195 } 00:14:54.195 ] 00:14:54.195 }' 00:14:54.195 16:30:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:54.195 16:30:30 -- common/autotest_common.sh@10 -- # set +x 00:14:54.761 16:30:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:55.020 [2024-07-11 16:30:31.658259] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.020 [2024-07-11 16:30:31.658427] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:55.020 [2024-07-11 16:30:31.658536] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:55.020 [2024-07-11 16:30:31.658687] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:55.020 [2024-07-11 16:30:31.659113] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:55.020 [2024-07-11 16:30:31.659231] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:14:55.020 [2024-07-11 16:30:31.659623] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.020 BaseBdev2 00:14:55.020 16:30:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:55.020 16:30:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:14:55.020 16:30:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:55.020 16:30:31 -- common/autotest_common.sh@889 -- # local i 00:14:55.020 16:30:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:55.020 16:30:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:55.020 
16:30:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:55.278 16:30:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:55.537 [ 00:14:55.537 { 00:14:55.537 "name": "BaseBdev2", 00:14:55.537 "aliases": [ 00:14:55.537 "67b71216-8d33-403f-9793-dd5fd883f1cc" 00:14:55.537 ], 00:14:55.537 "product_name": "Malloc disk", 00:14:55.537 "block_size": 512, 00:14:55.537 "num_blocks": 65536, 00:14:55.537 "uuid": "67b71216-8d33-403f-9793-dd5fd883f1cc", 00:14:55.537 "assigned_rate_limits": { 00:14:55.537 "rw_ios_per_sec": 0, 00:14:55.537 "rw_mbytes_per_sec": 0, 00:14:55.537 "r_mbytes_per_sec": 0, 00:14:55.537 "w_mbytes_per_sec": 0 00:14:55.537 }, 00:14:55.537 "claimed": true, 00:14:55.537 "claim_type": "exclusive_write", 00:14:55.537 "zoned": false, 00:14:55.537 "supported_io_types": { 00:14:55.537 "read": true, 00:14:55.537 "write": true, 00:14:55.537 "unmap": true, 00:14:55.537 "write_zeroes": true, 00:14:55.537 "flush": true, 00:14:55.537 "reset": true, 00:14:55.537 "compare": false, 00:14:55.537 "compare_and_write": false, 00:14:55.538 "abort": true, 00:14:55.538 "nvme_admin": false, 00:14:55.538 "nvme_io": false 00:14:55.538 }, 00:14:55.538 "memory_domains": [ 00:14:55.538 { 00:14:55.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.538 "dma_device_type": 2 00:14:55.538 } 00:14:55.538 ], 00:14:55.538 "driver_specific": {} 00:14:55.538 } 00:14:55.538 ] 00:14:55.538 16:30:32 -- common/autotest_common.sh@895 -- # return 0 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.538 16:30:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.797 16:30:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:55.797 "name": "Existed_Raid", 00:14:55.797 "uuid": "f0be1ea2-52b7-475f-8da2-92abc9b03571", 00:14:55.797 "strip_size_kb": 64, 00:14:55.797 "state": "online", 00:14:55.797 "raid_level": "raid0", 00:14:55.797 "superblock": false, 00:14:55.797 "num_base_bdevs": 2, 00:14:55.797 "num_base_bdevs_discovered": 2, 00:14:55.797 "num_base_bdevs_operational": 2, 00:14:55.797 "base_bdevs_list": [ 00:14:55.797 { 00:14:55.797 "name": "BaseBdev1", 00:14:55.797 "uuid": "8445fa7b-a1eb-46e2-aa94-48a8dead9cf5", 00:14:55.797 "is_configured": true, 00:14:55.797 "data_offset": 0, 00:14:55.797 "data_size": 65536 00:14:55.797 }, 00:14:55.797 { 00:14:55.797 "name": "BaseBdev2", 
00:14:55.797 "uuid": "67b71216-8d33-403f-9793-dd5fd883f1cc", 00:14:55.797 "is_configured": true, 00:14:55.797 "data_offset": 0, 00:14:55.797 "data_size": 65536 00:14:55.797 } 00:14:55.797 ] 00:14:55.797 }' 00:14:55.797 16:30:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:55.797 16:30:32 -- common/autotest_common.sh@10 -- # set +x 00:14:56.376 16:30:33 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:56.644 [2024-07-11 16:30:33.362630] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.644 [2024-07-11 16:30:33.362774] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.644 [2024-07-11 16:30:33.362956] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.644 16:30:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:56.644 16:30:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:56.644 16:30:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:56.644 16:30:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:56.644 16:30:33 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:56.645 16:30:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:56.645 16:30:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:56.645 16:30:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:56.645 16:30:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:56.645 16:30:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:56.645 16:30:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:56.645 16:30:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:56.645 16:30:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:56.645 16:30:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:56.645 16:30:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:56.645 16:30:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.645 16:30:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.902 16:30:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:56.902 "name": "Existed_Raid", 00:14:56.902 "uuid": "f0be1ea2-52b7-475f-8da2-92abc9b03571", 00:14:56.902 "strip_size_kb": 64, 00:14:56.902 "state": "offline", 00:14:56.902 "raid_level": "raid0", 00:14:56.902 "superblock": false, 00:14:56.902 "num_base_bdevs": 2, 00:14:56.902 "num_base_bdevs_discovered": 1, 00:14:56.902 "num_base_bdevs_operational": 1, 00:14:56.902 "base_bdevs_list": [ 00:14:56.902 { 00:14:56.902 "name": null, 00:14:56.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.902 "is_configured": false, 00:14:56.902 "data_offset": 0, 00:14:56.902 "data_size": 65536 00:14:56.902 }, 00:14:56.902 { 00:14:56.902 "name": "BaseBdev2", 00:14:56.902 "uuid": "67b71216-8d33-403f-9793-dd5fd883f1cc", 00:14:56.902 "is_configured": true, 00:14:56.902 "data_offset": 0, 00:14:56.902 "data_size": 65536 00:14:56.902 } 00:14:56.902 ] 00:14:56.902 }' 00:14:56.902 16:30:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:56.902 16:30:33 -- common/autotest_common.sh@10 -- # set +x 00:14:57.836 16:30:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:57.836 16:30:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:57.836 16:30:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.836 16:30:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:58.094 16:30:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:58.094 16:30:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:58.094 16:30:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:58.094 [2024-07-11 16:30:34.885949] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:58.094 [2024-07-11 16:30:34.886124] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:14:58.352 16:30:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:58.352 16:30:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:58.352 16:30:34 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.352 16:30:34 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:58.611 16:30:35 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:58.611 16:30:35 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:58.611 16:30:35 -- bdev/bdev_raid.sh@287 -- # killprocess 114769 00:14:58.611 16:30:35 -- common/autotest_common.sh@926 -- # '[' -z 114769 ']' 00:14:58.611 16:30:35 -- common/autotest_common.sh@930 -- # kill -0 114769 00:14:58.611 16:30:35 -- common/autotest_common.sh@931 -- # uname 00:14:58.611 16:30:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:58.611 16:30:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 114769 00:14:58.611 killing process with pid 114769 00:14:58.611 16:30:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:58.611 16:30:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:58.611 16:30:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 114769' 00:14:58.611 16:30:35 -- common/autotest_common.sh@945 -- # kill 114769 00:14:58.611 16:30:35 -- common/autotest_common.sh@950 -- # wait 114769 00:14:58.611 [2024-07-11 16:30:35.212542] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.611 [2024-07-11 16:30:35.212682] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:59.546 ************************************ 00:14:59.546 END TEST raid_state_function_test 00:14:59.546 ************************************ 00:14:59.546 16:30:36 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:59.546 00:14:59.546 real 0m9.874s 00:14:59.546 user 0m17.441s 00:14:59.546 sys 0m1.109s 00:14:59.546 16:30:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.546 16:30:36 -- common/autotest_common.sh@10 -- # set +x 00:14:59.546 16:30:36 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:59.546 16:30:36 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:14:59.546 16:30:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:59.546 16:30:36 -- common/autotest_common.sh@10 -- # set +x 00:14:59.546 ************************************ 00:14:59.546 START TEST raid_state_function_test_sb 00:14:59.546 ************************************ 00:14:59.546 16:30:36 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:14:59.546 16:30:36 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:59.546 16:30:36 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:59.546 16:30:36 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:59.546 16:30:36 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:59.546 16:30:36 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:59.546 16:30:36 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:59.546 16:30:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:59.546 16:30:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:59.546 16:30:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:59.546 16:30:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:59.546 16:30:36 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:59.546 16:30:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@226 -- # raid_pid=115104 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115104' 00:14:59.547 Process raid pid: 115104 00:14:59.547 16:30:36 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115104 /var/tmp/spdk-raid.sock 00:14:59.547 16:30:36 -- common/autotest_common.sh@819 -- # '[' -z 115104 ']' 00:14:59.547 16:30:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:59.547 16:30:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:59.547 16:30:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:59.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:59.547 16:30:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:59.547 16:30:36 -- common/autotest_common.sh@10 -- # set +x 00:14:59.547 [2024-07-11 16:30:36.224772] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
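The only knob the _sb variant changes is superblock=true, which the harness turns into a -s flag on bdev_raid_create. With superblocks enabled, RAID metadata is written to each base bdev, so the JSON dumps that follow show data_offset 2048 and data_size 63488 in place of the 0/65536 of the non-superblock run, and the assembled array registers with blockcnt 126976 (2 × 63488). A sketch of the create call, matching the command in the log and using the same rpc.py shorthand:

    # -s reserves space on each base bdev for the on-disk RAID superblock
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
        -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid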
00:14:59.547 [2024-07-11 16:30:36.225274] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.805 [2024-07-11 16:30:36.378618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.805 [2024-07-11 16:30:36.539999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.063 [2024-07-11 16:30:36.705373] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:00.629 16:30:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:00.629 16:30:37 -- common/autotest_common.sh@852 -- # return 0 00:15:00.629 16:30:37 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:00.629 [2024-07-11 16:30:37.376271] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.629 [2024-07-11 16:30:37.376500] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.629 [2024-07-11 16:30:37.376601] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.629 [2024-07-11 16:30:37.376659] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.629 16:30:37 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:00.629 16:30:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:00.629 16:30:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:00.629 16:30:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:00.629 16:30:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:00.629 16:30:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:00.629 16:30:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:00.629 16:30:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:00.629 16:30:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:00.629 16:30:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:00.629 16:30:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.629 16:30:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.888 16:30:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:00.888 "name": "Existed_Raid", 00:15:00.888 "uuid": "5d1acfab-0dd3-4643-a32b-2ebd1f996309", 00:15:00.888 "strip_size_kb": 64, 00:15:00.888 "state": "configuring", 00:15:00.888 "raid_level": "raid0", 00:15:00.888 "superblock": true, 00:15:00.888 "num_base_bdevs": 2, 00:15:00.888 "num_base_bdevs_discovered": 0, 00:15:00.888 "num_base_bdevs_operational": 2, 00:15:00.888 "base_bdevs_list": [ 00:15:00.888 { 00:15:00.888 "name": "BaseBdev1", 00:15:00.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.888 "is_configured": false, 00:15:00.888 "data_offset": 0, 00:15:00.888 "data_size": 0 00:15:00.888 }, 00:15:00.888 { 00:15:00.888 "name": "BaseBdev2", 00:15:00.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.888 "is_configured": false, 00:15:00.888 "data_offset": 0, 00:15:00.888 "data_size": 0 00:15:00.888 } 00:15:00.888 ] 00:15:00.888 }' 00:15:00.888 16:30:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:00.888 16:30:37 -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.457 16:30:38 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:01.716 [2024-07-11 16:30:38.424324] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.716 [2024-07-11 16:30:38.424492] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:01.716 16:30:38 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:01.975 [2024-07-11 16:30:38.600402] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.975 [2024-07-11 16:30:38.600594] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.975 [2024-07-11 16:30:38.600691] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.975 [2024-07-11 16:30:38.600809] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.975 16:30:38 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:02.234 [2024-07-11 16:30:38.797580] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.234 BaseBdev1 00:15:02.234 16:30:38 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:02.234 16:30:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:02.234 16:30:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:02.234 16:30:38 -- common/autotest_common.sh@889 -- # local i 00:15:02.234 16:30:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:02.234 16:30:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:02.234 16:30:38 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:02.234 16:30:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:02.501 [ 00:15:02.501 { 00:15:02.501 "name": "BaseBdev1", 00:15:02.501 "aliases": [ 00:15:02.501 "66dfac2e-cf9b-4ac0-8426-ca514a182a23" 00:15:02.501 ], 00:15:02.501 "product_name": "Malloc disk", 00:15:02.501 "block_size": 512, 00:15:02.501 "num_blocks": 65536, 00:15:02.501 "uuid": "66dfac2e-cf9b-4ac0-8426-ca514a182a23", 00:15:02.501 "assigned_rate_limits": { 00:15:02.501 "rw_ios_per_sec": 0, 00:15:02.501 "rw_mbytes_per_sec": 0, 00:15:02.501 "r_mbytes_per_sec": 0, 00:15:02.501 "w_mbytes_per_sec": 0 00:15:02.501 }, 00:15:02.501 "claimed": true, 00:15:02.501 "claim_type": "exclusive_write", 00:15:02.501 "zoned": false, 00:15:02.501 "supported_io_types": { 00:15:02.501 "read": true, 00:15:02.501 "write": true, 00:15:02.501 "unmap": true, 00:15:02.501 "write_zeroes": true, 00:15:02.501 "flush": true, 00:15:02.501 "reset": true, 00:15:02.501 "compare": false, 00:15:02.501 "compare_and_write": false, 00:15:02.501 "abort": true, 00:15:02.501 "nvme_admin": false, 00:15:02.501 "nvme_io": false 00:15:02.501 }, 00:15:02.501 "memory_domains": [ 00:15:02.501 { 00:15:02.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.501 "dma_device_type": 2 00:15:02.501 } 00:15:02.501 ], 00:15:02.501 "driver_specific": {} 00:15:02.501 } 00:15:02.501 ] 00:15:02.501 
16:30:39 -- common/autotest_common.sh@895 -- # return 0 00:15:02.501 16:30:39 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:02.501 16:30:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:02.501 16:30:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:02.501 16:30:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:02.501 16:30:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:02.501 16:30:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:02.501 16:30:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:02.501 16:30:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:02.501 16:30:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:02.501 16:30:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:02.501 16:30:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.501 16:30:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.772 16:30:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:02.772 "name": "Existed_Raid", 00:15:02.772 "uuid": "b1c4f26e-b01a-45d2-a28b-0394fc275a57", 00:15:02.772 "strip_size_kb": 64, 00:15:02.772 "state": "configuring", 00:15:02.772 "raid_level": "raid0", 00:15:02.772 "superblock": true, 00:15:02.772 "num_base_bdevs": 2, 00:15:02.772 "num_base_bdevs_discovered": 1, 00:15:02.772 "num_base_bdevs_operational": 2, 00:15:02.772 "base_bdevs_list": [ 00:15:02.772 { 00:15:02.772 "name": "BaseBdev1", 00:15:02.772 "uuid": "66dfac2e-cf9b-4ac0-8426-ca514a182a23", 00:15:02.772 "is_configured": true, 00:15:02.772 "data_offset": 2048, 00:15:02.772 "data_size": 63488 00:15:02.772 }, 00:15:02.772 { 00:15:02.772 "name": "BaseBdev2", 00:15:02.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.772 "is_configured": false, 00:15:02.772 "data_offset": 0, 00:15:02.772 "data_size": 0 00:15:02.772 } 00:15:02.772 ] 00:15:02.772 }' 00:15:02.772 16:30:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:02.772 16:30:39 -- common/autotest_common.sh@10 -- # set +x 00:15:03.338 16:30:40 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:03.596 [2024-07-11 16:30:40.257830] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:03.596 [2024-07-11 16:30:40.257979] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:03.596 16:30:40 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:03.596 16:30:40 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:03.855 16:30:40 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:04.113 BaseBdev1 00:15:04.113 16:30:40 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:04.113 16:30:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:04.113 16:30:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:04.113 16:30:40 -- common/autotest_common.sh@889 -- # local i 00:15:04.113 16:30:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:04.113 16:30:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:04.113 16:30:40 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:04.113 16:30:40 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:04.371 [ 00:15:04.371 { 00:15:04.371 "name": "BaseBdev1", 00:15:04.371 "aliases": [ 00:15:04.371 "287b85ad-e115-4971-b264-b23b86df03f5" 00:15:04.371 ], 00:15:04.371 "product_name": "Malloc disk", 00:15:04.371 "block_size": 512, 00:15:04.371 "num_blocks": 65536, 00:15:04.371 "uuid": "287b85ad-e115-4971-b264-b23b86df03f5", 00:15:04.371 "assigned_rate_limits": { 00:15:04.371 "rw_ios_per_sec": 0, 00:15:04.371 "rw_mbytes_per_sec": 0, 00:15:04.371 "r_mbytes_per_sec": 0, 00:15:04.371 "w_mbytes_per_sec": 0 00:15:04.371 }, 00:15:04.371 "claimed": false, 00:15:04.371 "zoned": false, 00:15:04.371 "supported_io_types": { 00:15:04.371 "read": true, 00:15:04.371 "write": true, 00:15:04.371 "unmap": true, 00:15:04.371 "write_zeroes": true, 00:15:04.371 "flush": true, 00:15:04.371 "reset": true, 00:15:04.371 "compare": false, 00:15:04.371 "compare_and_write": false, 00:15:04.371 "abort": true, 00:15:04.371 "nvme_admin": false, 00:15:04.371 "nvme_io": false 00:15:04.371 }, 00:15:04.371 "memory_domains": [ 00:15:04.371 { 00:15:04.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.371 "dma_device_type": 2 00:15:04.371 } 00:15:04.371 ], 00:15:04.371 "driver_specific": {} 00:15:04.371 } 00:15:04.371 ] 00:15:04.371 16:30:41 -- common/autotest_common.sh@895 -- # return 0 00:15:04.371 16:30:41 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:04.629 [2024-07-11 16:30:41.292571] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.629 [2024-07-11 16:30:41.294354] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.629 [2024-07-11 16:30:41.294530] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.629 16:30:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.887 16:30:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:04.887 "name": "Existed_Raid", 00:15:04.887 "uuid": "d0b23ece-8662-43d2-b25e-60c7be317546", 00:15:04.887 "strip_size_kb": 64, 00:15:04.887 "state": 
"configuring", 00:15:04.887 "raid_level": "raid0", 00:15:04.887 "superblock": true, 00:15:04.887 "num_base_bdevs": 2, 00:15:04.887 "num_base_bdevs_discovered": 1, 00:15:04.887 "num_base_bdevs_operational": 2, 00:15:04.887 "base_bdevs_list": [ 00:15:04.887 { 00:15:04.887 "name": "BaseBdev1", 00:15:04.887 "uuid": "287b85ad-e115-4971-b264-b23b86df03f5", 00:15:04.887 "is_configured": true, 00:15:04.887 "data_offset": 2048, 00:15:04.887 "data_size": 63488 00:15:04.887 }, 00:15:04.887 { 00:15:04.887 "name": "BaseBdev2", 00:15:04.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.887 "is_configured": false, 00:15:04.887 "data_offset": 0, 00:15:04.887 "data_size": 0 00:15:04.887 } 00:15:04.887 ] 00:15:04.887 }' 00:15:04.887 16:30:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:04.887 16:30:41 -- common/autotest_common.sh@10 -- # set +x 00:15:05.453 16:30:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:05.712 [2024-07-11 16:30:42.431497] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.712 [2024-07-11 16:30:42.431919] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:05.712 BaseBdev2 00:15:05.712 [2024-07-11 16:30:42.432572] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:05.712 [2024-07-11 16:30:42.432833] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:05.712 [2024-07-11 16:30:42.433501] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:05.712 [2024-07-11 16:30:42.433632] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:15:05.712 [2024-07-11 16:30:42.433849] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.712 16:30:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:05.712 16:30:42 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:05.712 16:30:42 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:05.712 16:30:42 -- common/autotest_common.sh@889 -- # local i 00:15:05.712 16:30:42 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:05.712 16:30:42 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:05.712 16:30:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.971 16:30:42 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:06.229 [ 00:15:06.229 { 00:15:06.229 "name": "BaseBdev2", 00:15:06.229 "aliases": [ 00:15:06.229 "d4e66fc9-80ce-41f1-bc04-76cd75980f5a" 00:15:06.229 ], 00:15:06.229 "product_name": "Malloc disk", 00:15:06.229 "block_size": 512, 00:15:06.229 "num_blocks": 65536, 00:15:06.229 "uuid": "d4e66fc9-80ce-41f1-bc04-76cd75980f5a", 00:15:06.229 "assigned_rate_limits": { 00:15:06.229 "rw_ios_per_sec": 0, 00:15:06.229 "rw_mbytes_per_sec": 0, 00:15:06.229 "r_mbytes_per_sec": 0, 00:15:06.229 "w_mbytes_per_sec": 0 00:15:06.229 }, 00:15:06.229 "claimed": true, 00:15:06.229 "claim_type": "exclusive_write", 00:15:06.229 "zoned": false, 00:15:06.229 "supported_io_types": { 00:15:06.229 "read": true, 00:15:06.229 "write": true, 00:15:06.229 "unmap": true, 00:15:06.229 "write_zeroes": true, 00:15:06.229 "flush": true, 00:15:06.229 
"reset": true, 00:15:06.229 "compare": false, 00:15:06.229 "compare_and_write": false, 00:15:06.229 "abort": true, 00:15:06.229 "nvme_admin": false, 00:15:06.229 "nvme_io": false 00:15:06.229 }, 00:15:06.229 "memory_domains": [ 00:15:06.229 { 00:15:06.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.229 "dma_device_type": 2 00:15:06.229 } 00:15:06.229 ], 00:15:06.229 "driver_specific": {} 00:15:06.229 } 00:15:06.229 ] 00:15:06.229 16:30:42 -- common/autotest_common.sh@895 -- # return 0 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.229 16:30:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.486 16:30:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.486 "name": "Existed_Raid", 00:15:06.486 "uuid": "d0b23ece-8662-43d2-b25e-60c7be317546", 00:15:06.486 "strip_size_kb": 64, 00:15:06.486 "state": "online", 00:15:06.486 "raid_level": "raid0", 00:15:06.486 "superblock": true, 00:15:06.486 "num_base_bdevs": 2, 00:15:06.486 "num_base_bdevs_discovered": 2, 00:15:06.486 "num_base_bdevs_operational": 2, 00:15:06.486 "base_bdevs_list": [ 00:15:06.486 { 00:15:06.486 "name": "BaseBdev1", 00:15:06.486 "uuid": "287b85ad-e115-4971-b264-b23b86df03f5", 00:15:06.486 "is_configured": true, 00:15:06.486 "data_offset": 2048, 00:15:06.486 "data_size": 63488 00:15:06.486 }, 00:15:06.486 { 00:15:06.486 "name": "BaseBdev2", 00:15:06.486 "uuid": "d4e66fc9-80ce-41f1-bc04-76cd75980f5a", 00:15:06.486 "is_configured": true, 00:15:06.486 "data_offset": 2048, 00:15:06.486 "data_size": 63488 00:15:06.486 } 00:15:06.486 ] 00:15:06.486 }' 00:15:06.486 16:30:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:06.486 16:30:43 -- common/autotest_common.sh@10 -- # set +x 00:15:07.051 16:30:43 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:07.309 [2024-07-11 16:30:43.875829] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:07.309 [2024-07-11 16:30:43.876018] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.309 [2024-07-11 16:30:43.876195] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:07.309 
16:30:43 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.309 16:30:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.567 16:30:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:07.567 "name": "Existed_Raid", 00:15:07.567 "uuid": "d0b23ece-8662-43d2-b25e-60c7be317546", 00:15:07.567 "strip_size_kb": 64, 00:15:07.567 "state": "offline", 00:15:07.567 "raid_level": "raid0", 00:15:07.567 "superblock": true, 00:15:07.567 "num_base_bdevs": 2, 00:15:07.567 "num_base_bdevs_discovered": 1, 00:15:07.567 "num_base_bdevs_operational": 1, 00:15:07.567 "base_bdevs_list": [ 00:15:07.567 { 00:15:07.567 "name": null, 00:15:07.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.567 "is_configured": false, 00:15:07.567 "data_offset": 2048, 00:15:07.567 "data_size": 63488 00:15:07.567 }, 00:15:07.567 { 00:15:07.567 "name": "BaseBdev2", 00:15:07.567 "uuid": "d4e66fc9-80ce-41f1-bc04-76cd75980f5a", 00:15:07.567 "is_configured": true, 00:15:07.567 "data_offset": 2048, 00:15:07.567 "data_size": 63488 00:15:07.567 } 00:15:07.567 ] 00:15:07.567 }' 00:15:07.567 16:30:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:07.567 16:30:44 -- common/autotest_common.sh@10 -- # set +x 00:15:08.133 16:30:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:08.133 16:30:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:08.133 16:30:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.133 16:30:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:08.391 16:30:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:08.391 16:30:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:08.391 16:30:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:08.649 [2024-07-11 16:30:45.367127] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:08.649 [2024-07-11 16:30:45.367370] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:15:08.649 16:30:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:08.649 16:30:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:08.649 16:30:45 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.649 16:30:45 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:08.907 16:30:45 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:08.907 16:30:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:08.907 16:30:45 -- bdev/bdev_raid.sh@287 -- # killprocess 115104 00:15:08.907 16:30:45 -- common/autotest_common.sh@926 -- # '[' -z 115104 ']' 00:15:08.907 16:30:45 -- common/autotest_common.sh@930 -- # kill -0 115104 00:15:08.907 16:30:45 -- common/autotest_common.sh@931 -- # uname 00:15:08.907 16:30:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:08.907 16:30:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115104 00:15:08.907 killing process with pid 115104 00:15:08.907 16:30:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:08.907 16:30:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:08.907 16:30:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115104' 00:15:08.907 16:30:45 -- common/autotest_common.sh@945 -- # kill 115104 00:15:08.907 16:30:45 -- common/autotest_common.sh@950 -- # wait 115104 00:15:09.165 [2024-07-11 16:30:45.715870] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.165 [2024-07-11 16:30:45.715999] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:10.100 ************************************ 00:15:10.100 END TEST raid_state_function_test_sb 00:15:10.100 ************************************ 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:10.100 00:15:10.100 real 0m10.464s 00:15:10.100 user 0m18.502s 00:15:10.100 sys 0m1.127s 00:15:10.100 16:30:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.100 16:30:46 -- common/autotest_common.sh@10 -- # set +x 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:10.100 16:30:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:10.100 16:30:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:10.100 16:30:46 -- common/autotest_common.sh@10 -- # set +x 00:15:10.100 ************************************ 00:15:10.100 START TEST raid_superblock_test 00:15:10.100 ************************************ 00:15:10.100 16:30:46 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@357 -- # raid_pid=115446 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@358 -- # waitforlisten 115446 
/var/tmp/spdk-raid.sock 00:15:10.100 16:30:46 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:10.100 16:30:46 -- common/autotest_common.sh@819 -- # '[' -z 115446 ']' 00:15:10.100 16:30:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:10.100 16:30:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:10.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:10.100 16:30:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:10.100 16:30:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:10.100 16:30:46 -- common/autotest_common.sh@10 -- # set +x 00:15:10.100 [2024-07-11 16:30:46.742275] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:10.101 [2024-07-11 16:30:46.742457] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115446 ] 00:15:10.360 [2024-07-11 16:30:46.910879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.360 [2024-07-11 16:30:47.106544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.618 [2024-07-11 16:30:47.267951] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.877 16:30:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:10.877 16:30:47 -- common/autotest_common.sh@852 -- # return 0 00:15:10.877 16:30:47 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:10.877 16:30:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:10.877 16:30:47 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:10.877 16:30:47 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:10.877 16:30:47 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:10.877 16:30:47 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:10.877 16:30:47 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:10.877 16:30:47 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:10.877 16:30:47 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:11.136 malloc1 00:15:11.136 16:30:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:11.395 [2024-07-11 16:30:48.083914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:11.395 [2024-07-11 16:30:48.084012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.395 [2024-07-11 16:30:48.084042] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:11.395 [2024-07-11 16:30:48.084084] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.395 [2024-07-11 16:30:48.086088] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.395 [2024-07-11 16:30:48.086132] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:11.395 pt1 00:15:11.395 16:30:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
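For reference, the base-bdev setup traced above reduces to two RPC calls; a minimal standalone sketch, assuming a bdev_svc app is listening on /var/tmp/spdk-raid.sock as in this run (the rpc alias below just shortens the script path from the log):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # 32 MB malloc disk with 512-byte blocks (65536 blocks, i.e. the test's 63488-block
  # data_size plus the 2048-block superblock offset)
  $rpc bdev_malloc_create 32 512 -b malloc1
  # wrap it in a passthru bdev with a fixed UUID so the RAID module claims pt1, not malloc1
  $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001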
00:15:11.395 16:30:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:11.395 16:30:48 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:11.395 16:30:48 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:11.395 16:30:48 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:11.395 16:30:48 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:11.395 16:30:48 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:11.395 16:30:48 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:11.395 16:30:48 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:11.653 malloc2 00:15:11.654 16:30:48 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:11.913 [2024-07-11 16:30:48.535117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:11.913 [2024-07-11 16:30:48.535200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.913 [2024-07-11 16:30:48.535238] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:11.913 [2024-07-11 16:30:48.535285] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.913 [2024-07-11 16:30:48.537183] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.913 [2024-07-11 16:30:48.537246] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:11.913 pt2 00:15:11.913 16:30:48 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:11.913 16:30:48 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:11.913 16:30:48 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:11.913 [2024-07-11 16:30:48.719206] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:11.913 [2024-07-11 16:30:48.720833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:11.913 [2024-07-11 16:30:48.721065] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:11.913 [2024-07-11 16:30:48.721080] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:11.913 [2024-07-11 16:30:48.721212] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:12.171 [2024-07-11 16:30:48.721550] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:12.171 [2024-07-11 16:30:48.721569] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:12.171 [2024-07-11 16:30:48.721732] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:12.171 "name": "raid_bdev1", 00:15:12.171 "uuid": "02857a6d-4bcc-4afc-80bc-d020844d5d23", 00:15:12.171 "strip_size_kb": 64, 00:15:12.171 "state": "online", 00:15:12.171 "raid_level": "raid0", 00:15:12.171 "superblock": true, 00:15:12.171 "num_base_bdevs": 2, 00:15:12.171 "num_base_bdevs_discovered": 2, 00:15:12.171 "num_base_bdevs_operational": 2, 00:15:12.171 "base_bdevs_list": [ 00:15:12.171 { 00:15:12.171 "name": "pt1", 00:15:12.171 "uuid": "d5c20519-d106-5edb-8b70-28e61d31b682", 00:15:12.171 "is_configured": true, 00:15:12.171 "data_offset": 2048, 00:15:12.171 "data_size": 63488 00:15:12.171 }, 00:15:12.171 { 00:15:12.171 "name": "pt2", 00:15:12.171 "uuid": "ae530ed5-a074-5fae-a3d9-4aec0f9dedcb", 00:15:12.171 "is_configured": true, 00:15:12.171 "data_offset": 2048, 00:15:12.171 "data_size": 63488 00:15:12.171 } 00:15:12.171 ] 00:15:12.171 }' 00:15:12.171 16:30:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:12.171 16:30:48 -- common/autotest_common.sh@10 -- # set +x 00:15:13.106 16:30:49 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:13.106 16:30:49 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:13.106 [2024-07-11 16:30:49.775495] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.106 16:30:49 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=02857a6d-4bcc-4afc-80bc-d020844d5d23 00:15:13.106 16:30:49 -- bdev/bdev_raid.sh@380 -- # '[' -z 02857a6d-4bcc-4afc-80bc-d020844d5d23 ']' 00:15:13.106 16:30:49 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:13.365 [2024-07-11 16:30:50.031370] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.365 [2024-07-11 16:30:50.031504] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.365 [2024-07-11 16:30:50.031658] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.365 [2024-07-11 16:30:50.031849] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.365 [2024-07-11 16:30:50.031942] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:13.365 16:30:50 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.365 16:30:50 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:13.624 16:30:50 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:13.624 16:30:50 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:13.624 16:30:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:13.624 16:30:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
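The create/verify/delete cycle just traced, condensed to the same sketch form (-z sets strip_size_kb; -s asks for an on-disk superblock):

  # assemble raid0 over the two passthru bdevs and confirm it comes online
  $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # "online"
  # deleting the raid bdev walks it from online to offline before the base bdevs are released
  $rpc bdev_raid_delete raid_bdev1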
00:15:13.882 16:30:50 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:13.882 16:30:50 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:13.882 16:30:50 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:13.882 16:30:50 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:14.141 16:30:50 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:14.141 16:30:50 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:14.141 16:30:50 -- common/autotest_common.sh@640 -- # local es=0 00:15:14.141 16:30:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:14.141 16:30:50 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.141 16:30:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:14.141 16:30:50 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.141 16:30:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:14.141 16:30:50 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.141 16:30:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:14.141 16:30:50 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:14.141 16:30:50 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:14.141 16:30:50 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:14.399 [2024-07-11 16:30:50.991514] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:14.399 [2024-07-11 16:30:50.993272] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:14.399 [2024-07-11 16:30:50.993471] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:14.399 [2024-07-11 16:30:50.993643] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:14.399 [2024-07-11 16:30:50.993708] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:14.399 [2024-07-11 16:30:50.993808] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:15:14.399 request: 00:15:14.399 { 00:15:14.399 "name": "raid_bdev1", 00:15:14.399 "raid_level": "raid0", 00:15:14.399 "base_bdevs": [ 00:15:14.399 "malloc1", 00:15:14.399 "malloc2" 00:15:14.399 ], 00:15:14.399 "superblock": false, 00:15:14.399 "strip_size_kb": 64, 00:15:14.399 "method": "bdev_raid_create", 00:15:14.399 "req_id": 1 00:15:14.399 } 00:15:14.399 Got JSON-RPC error response 00:15:14.399 response: 00:15:14.399 { 00:15:14.399 "code": -17, 00:15:14.399 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:14.399 } 00:15:14.399 16:30:50 -- common/autotest_common.sh@643 -- # es=1 00:15:14.399 16:30:50 -- common/autotest_common.sh@651 
-- # (( es > 128 )) 00:15:14.399 16:30:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:14.399 16:30:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:14.399 16:30:51 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.399 16:30:51 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:14.658 [2024-07-11 16:30:51.399534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:14.658 [2024-07-11 16:30:51.399736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.658 [2024-07-11 16:30:51.399799] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:14.658 [2024-07-11 16:30:51.399911] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.658 [2024-07-11 16:30:51.401890] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.658 [2024-07-11 16:30:51.402065] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:14.658 [2024-07-11 16:30:51.402263] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:14.658 [2024-07-11 16:30:51.402406] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:14.658 pt1 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.658 16:30:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.916 16:30:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.916 "name": "raid_bdev1", 00:15:14.916 "uuid": "02857a6d-4bcc-4afc-80bc-d020844d5d23", 00:15:14.916 "strip_size_kb": 64, 00:15:14.916 "state": "configuring", 00:15:14.916 "raid_level": "raid0", 00:15:14.916 "superblock": true, 00:15:14.916 "num_base_bdevs": 2, 00:15:14.916 "num_base_bdevs_discovered": 1, 00:15:14.916 "num_base_bdevs_operational": 2, 00:15:14.916 "base_bdevs_list": [ 00:15:14.916 { 00:15:14.916 "name": "pt1", 00:15:14.916 "uuid": "d5c20519-d106-5edb-8b70-28e61d31b682", 00:15:14.916 "is_configured": true, 00:15:14.916 "data_offset": 2048, 00:15:14.916 "data_size": 63488 00:15:14.916 }, 00:15:14.916 { 00:15:14.916 "name": null, 00:15:14.916 "uuid": "ae530ed5-a074-5fae-a3d9-4aec0f9dedcb", 00:15:14.916 
"is_configured": false, 00:15:14.916 "data_offset": 2048, 00:15:14.916 "data_size": 63488 00:15:14.916 } 00:15:14.916 ] 00:15:14.916 }' 00:15:14.916 16:30:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.916 16:30:51 -- common/autotest_common.sh@10 -- # set +x 00:15:15.482 16:30:52 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:15.482 16:30:52 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:15.482 16:30:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:15.482 16:30:52 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:15.741 [2024-07-11 16:30:52.439759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:15.741 [2024-07-11 16:30:52.440021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.741 [2024-07-11 16:30:52.440087] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:15.741 [2024-07-11 16:30:52.440329] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.741 [2024-07-11 16:30:52.440797] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.741 [2024-07-11 16:30:52.440986] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:15.741 [2024-07-11 16:30:52.441207] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:15.741 [2024-07-11 16:30:52.441339] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:15.741 [2024-07-11 16:30:52.441522] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:15:15.741 [2024-07-11 16:30:52.441630] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:15.741 [2024-07-11 16:30:52.441775] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:15.741 [2024-07-11 16:30:52.442089] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:15:15.741 [2024-07-11 16:30:52.442193] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:15:15.741 [2024-07-11 16:30:52.442395] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.741 pt2 00:15:15.741 16:30:52 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:15.741 16:30:52 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:15.741 16:30:52 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:15.741 16:30:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:15.741 16:30:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:15.741 16:30:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:15.741 16:30:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:15.741 16:30:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:15.741 16:30:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:15.741 16:30:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:15.741 16:30:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:15.741 16:30:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:15.741 16:30:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.741 16:30:52 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.999 16:30:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:15.999 "name": "raid_bdev1", 00:15:15.999 "uuid": "02857a6d-4bcc-4afc-80bc-d020844d5d23", 00:15:15.999 "strip_size_kb": 64, 00:15:15.999 "state": "online", 00:15:15.999 "raid_level": "raid0", 00:15:15.999 "superblock": true, 00:15:15.999 "num_base_bdevs": 2, 00:15:15.999 "num_base_bdevs_discovered": 2, 00:15:15.999 "num_base_bdevs_operational": 2, 00:15:15.999 "base_bdevs_list": [ 00:15:15.999 { 00:15:15.999 "name": "pt1", 00:15:15.999 "uuid": "d5c20519-d106-5edb-8b70-28e61d31b682", 00:15:15.999 "is_configured": true, 00:15:15.999 "data_offset": 2048, 00:15:15.999 "data_size": 63488 00:15:15.999 }, 00:15:15.999 { 00:15:15.999 "name": "pt2", 00:15:15.999 "uuid": "ae530ed5-a074-5fae-a3d9-4aec0f9dedcb", 00:15:15.999 "is_configured": true, 00:15:15.999 "data_offset": 2048, 00:15:15.999 "data_size": 63488 00:15:15.999 } 00:15:15.999 ] 00:15:15.999 }' 00:15:15.999 16:30:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:15.999 16:30:52 -- common/autotest_common.sh@10 -- # set +x 00:15:16.564 16:30:53 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:16.564 16:30:53 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:16.823 [2024-07-11 16:30:53.537519] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.823 16:30:53 -- bdev/bdev_raid.sh@430 -- # '[' 02857a6d-4bcc-4afc-80bc-d020844d5d23 '!=' 02857a6d-4bcc-4afc-80bc-d020844d5d23 ']' 00:15:16.823 16:30:53 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:16.823 16:30:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:16.823 16:30:53 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:16.823 16:30:53 -- bdev/bdev_raid.sh@511 -- # killprocess 115446 00:15:16.823 16:30:53 -- common/autotest_common.sh@926 -- # '[' -z 115446 ']' 00:15:16.823 16:30:53 -- common/autotest_common.sh@930 -- # kill -0 115446 00:15:16.823 16:30:53 -- common/autotest_common.sh@931 -- # uname 00:15:16.823 16:30:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:16.823 16:30:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115446 00:15:16.823 killing process with pid 115446 00:15:16.823 16:30:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:16.823 16:30:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:16.823 16:30:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115446' 00:15:16.823 16:30:53 -- common/autotest_common.sh@945 -- # kill 115446 00:15:16.823 16:30:53 -- common/autotest_common.sh@950 -- # wait 115446 00:15:16.823 [2024-07-11 16:30:53.574747] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.823 [2024-07-11 16:30:53.574850] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.823 [2024-07-11 16:30:53.574934] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.823 [2024-07-11 16:30:53.574975] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:15:17.081 [2024-07-11 16:30:53.700268] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.015 ************************************ 00:15:18.015 END TEST raid_superblock_test 00:15:18.015 ************************************ 00:15:18.015 16:30:54 -- 
bdev/bdev_raid.sh@513 -- # return 0 00:15:18.015 00:15:18.015 real 0m7.921s 00:15:18.015 user 0m13.733s 00:15:18.015 sys 0m0.852s 00:15:18.015 16:30:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:18.015 16:30:54 -- common/autotest_common.sh@10 -- # set +x 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:18.015 16:30:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:18.015 16:30:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:18.015 16:30:54 -- common/autotest_common.sh@10 -- # set +x 00:15:18.015 ************************************ 00:15:18.015 START TEST raid_state_function_test 00:15:18.015 ************************************ 00:15:18.015 16:30:54 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:18.015 16:30:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:18.016 Process raid pid: 115714 00:15:18.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
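The test now starting (raid_state_function_test concat 2 false) walks the same Existed_Raid state machine at RAID level concat and without superblocks, so its create call drops -s; a sketch of the call it issues once the app is up (same rpc alias as above):

  # accepted even though BaseBdev1 and BaseBdev2 do not exist yet; the array stays
  # in state "configuring" until both base bdevs appear and can be claimed
  $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid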
00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=115714 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 115714' 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 115714 /var/tmp/spdk-raid.sock 00:15:18.016 16:30:54 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:18.016 16:30:54 -- common/autotest_common.sh@819 -- # '[' -z 115714 ']' 00:15:18.016 16:30:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:18.016 16:30:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:18.016 16:30:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:18.016 16:30:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:18.016 16:30:54 -- common/autotest_common.sh@10 -- # set +x 00:15:18.016 [2024-07-11 16:30:54.717526] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:18.016 [2024-07-11 16:30:54.718868] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.273 [2024-07-11 16:30:54.889745] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.531 [2024-07-11 16:30:55.090618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.531 [2024-07-11 16:30:55.255370] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.097 16:30:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:19.097 16:30:55 -- common/autotest_common.sh@852 -- # return 0 00:15:19.097 16:30:55 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:19.097 [2024-07-11 16:30:55.888518] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.097 [2024-07-11 16:30:55.888722] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.097 [2024-07-11 16:30:55.888822] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.097 [2024-07-11 16:30:55.888877] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.097 16:30:55 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:19.097 16:30:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:19.097 16:30:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:19.097 16:30:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:19.097 16:30:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:19.097 16:30:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:19.097 16:30:55 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.097 16:30:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.097 16:30:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.097 16:30:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.097 16:30:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.097 16:30:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.354 16:30:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:19.355 "name": "Existed_Raid", 00:15:19.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.355 "strip_size_kb": 64, 00:15:19.355 "state": "configuring", 00:15:19.355 "raid_level": "concat", 00:15:19.355 "superblock": false, 00:15:19.355 "num_base_bdevs": 2, 00:15:19.355 "num_base_bdevs_discovered": 0, 00:15:19.355 "num_base_bdevs_operational": 2, 00:15:19.355 "base_bdevs_list": [ 00:15:19.355 { 00:15:19.355 "name": "BaseBdev1", 00:15:19.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.355 "is_configured": false, 00:15:19.355 "data_offset": 0, 00:15:19.355 "data_size": 0 00:15:19.355 }, 00:15:19.355 { 00:15:19.355 "name": "BaseBdev2", 00:15:19.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.355 "is_configured": false, 00:15:19.355 "data_offset": 0, 00:15:19.355 "data_size": 0 00:15:19.355 } 00:15:19.355 ] 00:15:19.355 }' 00:15:19.355 16:30:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:19.355 16:30:56 -- common/autotest_common.sh@10 -- # set +x 00:15:20.287 16:30:56 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:20.287 [2024-07-11 16:30:56.924612] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:20.287 [2024-07-11 16:30:56.924760] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:20.287 16:30:56 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:20.544 [2024-07-11 16:30:57.168668] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.544 [2024-07-11 16:30:57.168860] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.544 [2024-07-11 16:30:57.168989] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.544 [2024-07-11 16:30:57.169047] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.544 16:30:57 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:20.802 [2024-07-11 16:30:57.433751] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.802 BaseBdev1 00:15:20.802 16:30:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:20.802 16:30:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:20.802 16:30:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:20.802 16:30:57 -- common/autotest_common.sh@889 -- # local i 00:15:20.802 16:30:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:20.802 16:30:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:20.802 16:30:57 -- 
common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:21.060 16:30:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:21.060 [ 00:15:21.060 { 00:15:21.060 "name": "BaseBdev1", 00:15:21.060 "aliases": [ 00:15:21.060 "a5fe3299-8e3f-4b7b-8977-7795c90ad9d3" 00:15:21.060 ], 00:15:21.060 "product_name": "Malloc disk", 00:15:21.060 "block_size": 512, 00:15:21.060 "num_blocks": 65536, 00:15:21.060 "uuid": "a5fe3299-8e3f-4b7b-8977-7795c90ad9d3", 00:15:21.060 "assigned_rate_limits": { 00:15:21.060 "rw_ios_per_sec": 0, 00:15:21.060 "rw_mbytes_per_sec": 0, 00:15:21.060 "r_mbytes_per_sec": 0, 00:15:21.060 "w_mbytes_per_sec": 0 00:15:21.060 }, 00:15:21.060 "claimed": true, 00:15:21.060 "claim_type": "exclusive_write", 00:15:21.060 "zoned": false, 00:15:21.060 "supported_io_types": { 00:15:21.060 "read": true, 00:15:21.060 "write": true, 00:15:21.060 "unmap": true, 00:15:21.060 "write_zeroes": true, 00:15:21.060 "flush": true, 00:15:21.060 "reset": true, 00:15:21.060 "compare": false, 00:15:21.060 "compare_and_write": false, 00:15:21.060 "abort": true, 00:15:21.060 "nvme_admin": false, 00:15:21.060 "nvme_io": false 00:15:21.060 }, 00:15:21.060 "memory_domains": [ 00:15:21.060 { 00:15:21.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.060 "dma_device_type": 2 00:15:21.060 } 00:15:21.060 ], 00:15:21.060 "driver_specific": {} 00:15:21.060 } 00:15:21.060 ] 00:15:21.318 16:30:57 -- common/autotest_common.sh@895 -- # return 0 00:15:21.318 16:30:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:21.318 16:30:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:21.318 16:30:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:21.318 16:30:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:21.318 16:30:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:21.318 16:30:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:21.318 16:30:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.318 16:30:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.318 16:30:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.318 16:30:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.318 16:30:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.318 16:30:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.318 16:30:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.318 "name": "Existed_Raid", 00:15:21.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.318 "strip_size_kb": 64, 00:15:21.318 "state": "configuring", 00:15:21.318 "raid_level": "concat", 00:15:21.318 "superblock": false, 00:15:21.318 "num_base_bdevs": 2, 00:15:21.318 "num_base_bdevs_discovered": 1, 00:15:21.318 "num_base_bdevs_operational": 2, 00:15:21.318 "base_bdevs_list": [ 00:15:21.318 { 00:15:21.318 "name": "BaseBdev1", 00:15:21.318 "uuid": "a5fe3299-8e3f-4b7b-8977-7795c90ad9d3", 00:15:21.318 "is_configured": true, 00:15:21.318 "data_offset": 0, 00:15:21.318 "data_size": 65536 00:15:21.318 }, 00:15:21.318 { 00:15:21.318 "name": "BaseBdev2", 00:15:21.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.318 "is_configured": false, 00:15:21.318 "data_offset": 0, 
00:15:21.318 "data_size": 0 00:15:21.318 } 00:15:21.318 ] 00:15:21.318 }' 00:15:21.318 16:30:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.318 16:30:58 -- common/autotest_common.sh@10 -- # set +x 00:15:22.254 16:30:58 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:22.254 [2024-07-11 16:30:58.958607] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.254 [2024-07-11 16:30:58.958788] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:22.254 16:30:58 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:22.254 16:30:58 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:22.513 [2024-07-11 16:30:59.142671] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.513 [2024-07-11 16:30:59.144300] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:22.513 [2024-07-11 16:30:59.144453] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.513 16:30:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.772 16:30:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:22.772 "name": "Existed_Raid", 00:15:22.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.772 "strip_size_kb": 64, 00:15:22.772 "state": "configuring", 00:15:22.772 "raid_level": "concat", 00:15:22.772 "superblock": false, 00:15:22.772 "num_base_bdevs": 2, 00:15:22.772 "num_base_bdevs_discovered": 1, 00:15:22.772 "num_base_bdevs_operational": 2, 00:15:22.772 "base_bdevs_list": [ 00:15:22.772 { 00:15:22.772 "name": "BaseBdev1", 00:15:22.772 "uuid": "a5fe3299-8e3f-4b7b-8977-7795c90ad9d3", 00:15:22.772 "is_configured": true, 00:15:22.772 "data_offset": 0, 00:15:22.772 "data_size": 65536 00:15:22.772 }, 00:15:22.772 { 00:15:22.772 "name": "BaseBdev2", 00:15:22.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.772 "is_configured": false, 00:15:22.772 "data_offset": 0, 00:15:22.772 "data_size": 0 00:15:22.772 } 00:15:22.772 ] 00:15:22.772 }' 00:15:22.772 16:30:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:22.772 16:30:59 -- common/autotest_common.sh@10 
-- # set +x 00:15:23.339 16:31:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:23.598 [2024-07-11 16:31:00.297948] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:23.598 [2024-07-11 16:31:00.298163] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:23.598 [2024-07-11 16:31:00.298210] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:23.598 [2024-07-11 16:31:00.298395] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:15:23.598 [2024-07-11 16:31:00.298860] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:23.598 [2024-07-11 16:31:00.299011] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:23.598 [2024-07-11 16:31:00.299356] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.598 BaseBdev2 00:15:23.598 16:31:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:23.598 16:31:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:23.598 16:31:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:23.598 16:31:00 -- common/autotest_common.sh@889 -- # local i 00:15:23.598 16:31:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:23.598 16:31:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:23.598 16:31:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:23.857 16:31:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.116 [ 00:15:24.116 { 00:15:24.116 "name": "BaseBdev2", 00:15:24.116 "aliases": [ 00:15:24.116 "30efa074-992b-47ed-88a0-310c5df4c0e6" 00:15:24.116 ], 00:15:24.116 "product_name": "Malloc disk", 00:15:24.116 "block_size": 512, 00:15:24.116 "num_blocks": 65536, 00:15:24.116 "uuid": "30efa074-992b-47ed-88a0-310c5df4c0e6", 00:15:24.116 "assigned_rate_limits": { 00:15:24.116 "rw_ios_per_sec": 0, 00:15:24.116 "rw_mbytes_per_sec": 0, 00:15:24.116 "r_mbytes_per_sec": 0, 00:15:24.116 "w_mbytes_per_sec": 0 00:15:24.116 }, 00:15:24.116 "claimed": true, 00:15:24.116 "claim_type": "exclusive_write", 00:15:24.116 "zoned": false, 00:15:24.116 "supported_io_types": { 00:15:24.116 "read": true, 00:15:24.116 "write": true, 00:15:24.116 "unmap": true, 00:15:24.116 "write_zeroes": true, 00:15:24.116 "flush": true, 00:15:24.116 "reset": true, 00:15:24.116 "compare": false, 00:15:24.116 "compare_and_write": false, 00:15:24.116 "abort": true, 00:15:24.116 "nvme_admin": false, 00:15:24.116 "nvme_io": false 00:15:24.116 }, 00:15:24.116 "memory_domains": [ 00:15:24.116 { 00:15:24.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.116 "dma_device_type": 2 00:15:24.116 } 00:15:24.116 ], 00:15:24.116 "driver_specific": {} 00:15:24.116 } 00:15:24.116 ] 00:15:24.116 16:31:00 -- common/autotest_common.sh@895 -- # return 0 00:15:24.116 16:31:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:24.116 16:31:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:24.116 16:31:00 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:24.116 16:31:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.116 
16:31:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:24.116 16:31:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:24.116 16:31:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:24.116 16:31:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:24.116 16:31:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.116 16:31:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.116 16:31:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.116 16:31:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.116 16:31:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.116 16:31:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.375 16:31:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:24.375 "name": "Existed_Raid", 00:15:24.375 "uuid": "bdab6b9f-c2ee-4b12-afeb-a1405ca2aac4", 00:15:24.375 "strip_size_kb": 64, 00:15:24.375 "state": "online", 00:15:24.375 "raid_level": "concat", 00:15:24.375 "superblock": false, 00:15:24.375 "num_base_bdevs": 2, 00:15:24.375 "num_base_bdevs_discovered": 2, 00:15:24.375 "num_base_bdevs_operational": 2, 00:15:24.375 "base_bdevs_list": [ 00:15:24.375 { 00:15:24.375 "name": "BaseBdev1", 00:15:24.375 "uuid": "a5fe3299-8e3f-4b7b-8977-7795c90ad9d3", 00:15:24.375 "is_configured": true, 00:15:24.375 "data_offset": 0, 00:15:24.375 "data_size": 65536 00:15:24.375 }, 00:15:24.375 { 00:15:24.375 "name": "BaseBdev2", 00:15:24.375 "uuid": "30efa074-992b-47ed-88a0-310c5df4c0e6", 00:15:24.375 "is_configured": true, 00:15:24.375 "data_offset": 0, 00:15:24.375 "data_size": 65536 00:15:24.375 } 00:15:24.375 ] 00:15:24.375 }' 00:15:24.375 16:31:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:24.375 16:31:00 -- common/autotest_common.sh@10 -- # set +x 00:15:24.942 16:31:01 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:25.201 [2024-07-11 16:31:01.790298] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.201 [2024-07-11 16:31:01.790437] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:25.201 [2024-07-11 16:31:01.790584] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:25.201 16:31:01 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:25.201 16:31:01 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:25.201 16:31:01 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:25.201 16:31:01 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:25.201 16:31:01 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:25.202 16:31:01 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:25.202 16:31:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:25.202 16:31:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:25.202 16:31:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:25.202 16:31:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:25.202 16:31:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:25.202 16:31:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:25.202 16:31:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:25.202 16:31:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:25.202 16:31:01 -- bdev/bdev_raid.sh@125 
-- # local tmp 00:15:25.202 16:31:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.202 16:31:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.460 16:31:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.460 "name": "Existed_Raid", 00:15:25.460 "uuid": "bdab6b9f-c2ee-4b12-afeb-a1405ca2aac4", 00:15:25.460 "strip_size_kb": 64, 00:15:25.460 "state": "offline", 00:15:25.460 "raid_level": "concat", 00:15:25.460 "superblock": false, 00:15:25.460 "num_base_bdevs": 2, 00:15:25.460 "num_base_bdevs_discovered": 1, 00:15:25.460 "num_base_bdevs_operational": 1, 00:15:25.460 "base_bdevs_list": [ 00:15:25.460 { 00:15:25.460 "name": null, 00:15:25.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.460 "is_configured": false, 00:15:25.460 "data_offset": 0, 00:15:25.460 "data_size": 65536 00:15:25.460 }, 00:15:25.460 { 00:15:25.460 "name": "BaseBdev2", 00:15:25.460 "uuid": "30efa074-992b-47ed-88a0-310c5df4c0e6", 00:15:25.460 "is_configured": true, 00:15:25.461 "data_offset": 0, 00:15:25.461 "data_size": 65536 00:15:25.461 } 00:15:25.461 ] 00:15:25.461 }' 00:15:25.461 16:31:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.461 16:31:02 -- common/autotest_common.sh@10 -- # set +x 00:15:26.028 16:31:02 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:26.029 16:31:02 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:26.029 16:31:02 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.029 16:31:02 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:26.287 16:31:03 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:26.287 16:31:03 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:26.287 16:31:03 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:26.547 [2024-07-11 16:31:03.245658] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:26.547 [2024-07-11 16:31:03.245829] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:26.547 16:31:03 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:26.547 16:31:03 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:26.547 16:31:03 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:26.547 16:31:03 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.808 16:31:03 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:26.808 16:31:03 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:26.808 16:31:03 -- bdev/bdev_raid.sh@287 -- # killprocess 115714 00:15:26.808 16:31:03 -- common/autotest_common.sh@926 -- # '[' -z 115714 ']' 00:15:26.808 16:31:03 -- common/autotest_common.sh@930 -- # kill -0 115714 00:15:26.808 16:31:03 -- common/autotest_common.sh@931 -- # uname 00:15:26.808 16:31:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:26.808 16:31:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 115714 00:15:26.808 killing process with pid 115714 00:15:26.808 16:31:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:26.808 16:31:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:26.808 16:31:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 115714' 
00:15:26.808 16:31:03 -- common/autotest_common.sh@945 -- # kill 115714 00:15:26.808 16:31:03 -- common/autotest_common.sh@950 -- # wait 115714 00:15:26.808 [2024-07-11 16:31:03.598458] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.808 [2024-07-11 16:31:03.598561] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:27.742 ************************************ 00:15:27.742 END TEST raid_state_function_test 00:15:27.742 ************************************ 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:27.742 00:15:27.742 real 0m9.843s 00:15:27.742 user 0m17.533s 00:15:27.742 sys 0m0.969s 00:15:27.742 16:31:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.742 16:31:04 -- common/autotest_common.sh@10 -- # set +x 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:27.742 16:31:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:27.742 16:31:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:27.742 16:31:04 -- common/autotest_common.sh@10 -- # set +x 00:15:27.742 ************************************ 00:15:27.742 START TEST raid_state_function_test_sb 00:15:27.742 ************************************ 00:15:27.742 16:31:04 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:27.742 16:31:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:28.001 Process raid pid: 116048 00:15:28.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
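The _sb variant now underway (raid_state_function_test concat 2 true) repeats that walk with superblocks enabled, so the create call gains -s, as seen below; sketch:

  # with -s, a superblock is written at the head of each claimed base bdev, which is
  # why the superblock runs above report data_offset 2048 where the plain runs show 0
  $rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid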
00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=116048 00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116048' 00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:28.001 16:31:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116048 /var/tmp/spdk-raid.sock 00:15:28.001 16:31:04 -- common/autotest_common.sh@819 -- # '[' -z 116048 ']' 00:15:28.001 16:31:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:28.001 16:31:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:28.001 16:31:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:28.001 16:31:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:28.001 16:31:04 -- common/autotest_common.sh@10 -- # set +x 00:15:28.001 [2024-07-11 16:31:04.608529] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:28.001 [2024-07-11 16:31:04.608914] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.001 [2024-07-11 16:31:04.771829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.259 [2024-07-11 16:31:04.927894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.517 [2024-07-11 16:31:05.092671] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:28.776 16:31:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:28.776 16:31:05 -- common/autotest_common.sh@852 -- # return 0 00:15:28.776 16:31:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:29.034 [2024-07-11 16:31:05.675190] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.034 [2024-07-11 16:31:05.675416] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.034 [2024-07-11 16:31:05.675518] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.034 [2024-07-11 16:31:05.675575] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.034 16:31:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:29.034 16:31:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:29.034 16:31:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:29.034 16:31:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:29.034 16:31:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.034 16:31:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:29.034 16:31:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.034 16:31:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.034 16:31:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.034 16:31:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.034 16:31:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.034 16:31:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.293 16:31:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:29.293 "name": "Existed_Raid", 00:15:29.293 "uuid": "0b3a397b-bbb3-45be-87ce-237b8bc4d032", 00:15:29.293 "strip_size_kb": 64, 00:15:29.293 "state": "configuring", 00:15:29.293 "raid_level": "concat", 00:15:29.293 "superblock": true, 00:15:29.293 "num_base_bdevs": 2, 00:15:29.293 "num_base_bdevs_discovered": 0, 00:15:29.293 "num_base_bdevs_operational": 2, 00:15:29.293 "base_bdevs_list": [ 00:15:29.293 { 00:15:29.293 "name": "BaseBdev1", 00:15:29.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.293 "is_configured": false, 00:15:29.293 "data_offset": 0, 00:15:29.293 "data_size": 0 00:15:29.293 }, 00:15:29.293 { 00:15:29.293 "name": "BaseBdev2", 00:15:29.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.293 "is_configured": false, 00:15:29.293 "data_offset": 0, 00:15:29.293 "data_size": 0 00:15:29.293 } 00:15:29.293 ] 00:15:29.293 }' 00:15:29.293 16:31:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:29.293 16:31:05 -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.861 16:31:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:30.120 [2024-07-11 16:31:06.679234] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:30.120 [2024-07-11 16:31:06.679428] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:30.120 16:31:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:30.379 [2024-07-11 16:31:06.931381] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.379 [2024-07-11 16:31:06.931613] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.379 [2024-07-11 16:31:06.931712] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.379 [2024-07-11 16:31:06.931770] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.379 16:31:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:30.379 [2024-07-11 16:31:07.136254] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.379 BaseBdev1 00:15:30.379 16:31:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:30.380 16:31:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:30.380 16:31:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:30.380 16:31:07 -- common/autotest_common.sh@889 -- # local i 00:15:30.380 16:31:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:30.380 16:31:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:30.380 16:31:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.638 16:31:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:30.896 [ 00:15:30.896 { 00:15:30.896 "name": "BaseBdev1", 00:15:30.896 "aliases": [ 00:15:30.896 "744aff02-48b2-4b7a-b831-432a8d2cfdc9" 00:15:30.896 ], 00:15:30.896 "product_name": "Malloc disk", 00:15:30.896 "block_size": 512, 00:15:30.896 "num_blocks": 65536, 00:15:30.896 "uuid": "744aff02-48b2-4b7a-b831-432a8d2cfdc9", 00:15:30.896 "assigned_rate_limits": { 00:15:30.896 "rw_ios_per_sec": 0, 00:15:30.896 "rw_mbytes_per_sec": 0, 00:15:30.896 "r_mbytes_per_sec": 0, 00:15:30.896 "w_mbytes_per_sec": 0 00:15:30.896 }, 00:15:30.896 "claimed": true, 00:15:30.896 "claim_type": "exclusive_write", 00:15:30.896 "zoned": false, 00:15:30.896 "supported_io_types": { 00:15:30.896 "read": true, 00:15:30.896 "write": true, 00:15:30.896 "unmap": true, 00:15:30.896 "write_zeroes": true, 00:15:30.896 "flush": true, 00:15:30.896 "reset": true, 00:15:30.896 "compare": false, 00:15:30.896 "compare_and_write": false, 00:15:30.896 "abort": true, 00:15:30.896 "nvme_admin": false, 00:15:30.896 "nvme_io": false 00:15:30.896 }, 00:15:30.896 "memory_domains": [ 00:15:30.896 { 00:15:30.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.896 "dma_device_type": 2 00:15:30.896 } 00:15:30.896 ], 00:15:30.896 "driver_specific": {} 00:15:30.896 } 00:15:30.896 ] 00:15:30.897 
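
The malloc disk built above with bdev_malloc_create 32 512 is 32 MiB with 512-byte blocks, so the dump correctly reports 65536 blocks (32 * 1024 * 1024 / 512). A short sketch of that geometry check, reusing the traced RPC calls (the unquoted $RPC relies on word splitting, as in the test scripts themselves):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $RPC bdev_malloc_create 32 512 -b BaseBdev1   # 32 MiB, 512 B blocks
    $RPC bdev_wait_for_examine
    blocks=$($RPC bdev_get_bdevs -b BaseBdev1 -t 2000 | jq -r '.[0].num_blocks')
    [[ $blocks -eq 65536 ]] || echo "unexpected geometry: $blocks blocks" >&2

Once the raid is created with -s, each base contributes data_offset 2048 and data_size 63488 in the raid dump below, i.e. 2048 of the 65536 blocks (1 MiB) are reserved for the on-disk superblock.
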
16:31:07 -- common/autotest_common.sh@895 -- # return 0 00:15:30.897 16:31:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:30.897 16:31:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:30.897 16:31:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:30.897 16:31:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:30.897 16:31:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:30.897 16:31:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:30.897 16:31:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.897 16:31:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.897 16:31:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.897 16:31:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.897 16:31:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.897 16:31:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.155 16:31:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.155 "name": "Existed_Raid", 00:15:31.155 "uuid": "8ae33ec7-326c-4638-b675-212a9109d942", 00:15:31.155 "strip_size_kb": 64, 00:15:31.155 "state": "configuring", 00:15:31.155 "raid_level": "concat", 00:15:31.155 "superblock": true, 00:15:31.155 "num_base_bdevs": 2, 00:15:31.155 "num_base_bdevs_discovered": 1, 00:15:31.155 "num_base_bdevs_operational": 2, 00:15:31.155 "base_bdevs_list": [ 00:15:31.155 { 00:15:31.155 "name": "BaseBdev1", 00:15:31.155 "uuid": "744aff02-48b2-4b7a-b831-432a8d2cfdc9", 00:15:31.155 "is_configured": true, 00:15:31.155 "data_offset": 2048, 00:15:31.155 "data_size": 63488 00:15:31.155 }, 00:15:31.155 { 00:15:31.155 "name": "BaseBdev2", 00:15:31.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.155 "is_configured": false, 00:15:31.155 "data_offset": 0, 00:15:31.155 "data_size": 0 00:15:31.155 } 00:15:31.155 ] 00:15:31.155 }' 00:15:31.155 16:31:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.155 16:31:07 -- common/autotest_common.sh@10 -- # set +x 00:15:31.723 16:31:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:31.981 [2024-07-11 16:31:08.620523] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.981 [2024-07-11 16:31:08.620674] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:31.981 16:31:08 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:31.981 16:31:08 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:32.240 16:31:08 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:32.499 BaseBdev1 00:15:32.499 16:31:09 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:32.499 16:31:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:32.499 16:31:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:32.499 16:31:09 -- common/autotest_common.sh@889 -- # local i 00:15:32.499 16:31:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:32.499 16:31:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:32.499 16:31:09 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:32.757 16:31:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:32.757 [ 00:15:32.757 { 00:15:32.757 "name": "BaseBdev1", 00:15:32.757 "aliases": [ 00:15:32.757 "d11bb71b-28d4-431e-a7c5-d62f3257036d" 00:15:32.757 ], 00:15:32.757 "product_name": "Malloc disk", 00:15:32.757 "block_size": 512, 00:15:32.757 "num_blocks": 65536, 00:15:32.757 "uuid": "d11bb71b-28d4-431e-a7c5-d62f3257036d", 00:15:32.757 "assigned_rate_limits": { 00:15:32.757 "rw_ios_per_sec": 0, 00:15:32.757 "rw_mbytes_per_sec": 0, 00:15:32.757 "r_mbytes_per_sec": 0, 00:15:32.757 "w_mbytes_per_sec": 0 00:15:32.757 }, 00:15:32.757 "claimed": false, 00:15:32.757 "zoned": false, 00:15:32.757 "supported_io_types": { 00:15:32.757 "read": true, 00:15:32.757 "write": true, 00:15:32.757 "unmap": true, 00:15:32.757 "write_zeroes": true, 00:15:32.757 "flush": true, 00:15:32.757 "reset": true, 00:15:32.757 "compare": false, 00:15:32.757 "compare_and_write": false, 00:15:32.757 "abort": true, 00:15:32.757 "nvme_admin": false, 00:15:32.757 "nvme_io": false 00:15:32.757 }, 00:15:32.757 "memory_domains": [ 00:15:32.757 { 00:15:32.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.757 "dma_device_type": 2 00:15:32.757 } 00:15:32.757 ], 00:15:32.757 "driver_specific": {} 00:15:32.757 } 00:15:32.757 ] 00:15:32.757 16:31:09 -- common/autotest_common.sh@895 -- # return 0 00:15:32.757 16:31:09 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:33.015 [2024-07-11 16:31:09.648454] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.015 [2024-07-11 16:31:09.650107] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.015 [2024-07-11 16:31:09.650291] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.015 16:31:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.274 16:31:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.274 "name": "Existed_Raid", 00:15:33.274 "uuid": "3576f6e9-1960-4f65-9eb4-dae0d4d81732", 00:15:33.274 "strip_size_kb": 64, 00:15:33.274 "state": 
"configuring", 00:15:33.274 "raid_level": "concat", 00:15:33.274 "superblock": true, 00:15:33.274 "num_base_bdevs": 2, 00:15:33.274 "num_base_bdevs_discovered": 1, 00:15:33.274 "num_base_bdevs_operational": 2, 00:15:33.274 "base_bdevs_list": [ 00:15:33.274 { 00:15:33.274 "name": "BaseBdev1", 00:15:33.274 "uuid": "d11bb71b-28d4-431e-a7c5-d62f3257036d", 00:15:33.274 "is_configured": true, 00:15:33.274 "data_offset": 2048, 00:15:33.274 "data_size": 63488 00:15:33.274 }, 00:15:33.274 { 00:15:33.274 "name": "BaseBdev2", 00:15:33.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.274 "is_configured": false, 00:15:33.274 "data_offset": 0, 00:15:33.274 "data_size": 0 00:15:33.274 } 00:15:33.274 ] 00:15:33.274 }' 00:15:33.274 16:31:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.274 16:31:09 -- common/autotest_common.sh@10 -- # set +x 00:15:33.842 16:31:10 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:34.102 [2024-07-11 16:31:10.725155] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.102 [2024-07-11 16:31:10.725590] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:34.102 [2024-07-11 16:31:10.725706] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:34.102 BaseBdev2 00:15:34.102 [2024-07-11 16:31:10.725871] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:34.102 [2024-07-11 16:31:10.726300] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:34.102 [2024-07-11 16:31:10.726427] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:15:34.102 [2024-07-11 16:31:10.726661] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.102 16:31:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:34.102 16:31:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:34.102 16:31:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:34.102 16:31:10 -- common/autotest_common.sh@889 -- # local i 00:15:34.102 16:31:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:34.102 16:31:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:34.102 16:31:10 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:34.360 16:31:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:34.360 [ 00:15:34.360 { 00:15:34.360 "name": "BaseBdev2", 00:15:34.360 "aliases": [ 00:15:34.360 "e90ab4b4-61bb-459f-b7af-fa7b767a26d7" 00:15:34.360 ], 00:15:34.360 "product_name": "Malloc disk", 00:15:34.360 "block_size": 512, 00:15:34.360 "num_blocks": 65536, 00:15:34.360 "uuid": "e90ab4b4-61bb-459f-b7af-fa7b767a26d7", 00:15:34.360 "assigned_rate_limits": { 00:15:34.360 "rw_ios_per_sec": 0, 00:15:34.360 "rw_mbytes_per_sec": 0, 00:15:34.360 "r_mbytes_per_sec": 0, 00:15:34.360 "w_mbytes_per_sec": 0 00:15:34.360 }, 00:15:34.360 "claimed": true, 00:15:34.360 "claim_type": "exclusive_write", 00:15:34.360 "zoned": false, 00:15:34.360 "supported_io_types": { 00:15:34.360 "read": true, 00:15:34.360 "write": true, 00:15:34.360 "unmap": true, 00:15:34.360 "write_zeroes": true, 00:15:34.360 "flush": true, 00:15:34.360 
"reset": true, 00:15:34.360 "compare": false, 00:15:34.360 "compare_and_write": false, 00:15:34.360 "abort": true, 00:15:34.360 "nvme_admin": false, 00:15:34.360 "nvme_io": false 00:15:34.360 }, 00:15:34.360 "memory_domains": [ 00:15:34.360 { 00:15:34.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.360 "dma_device_type": 2 00:15:34.360 } 00:15:34.360 ], 00:15:34.360 "driver_specific": {} 00:15:34.360 } 00:15:34.360 ] 00:15:34.360 16:31:11 -- common/autotest_common.sh@895 -- # return 0 00:15:34.360 16:31:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:34.360 16:31:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:34.360 16:31:11 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:34.360 16:31:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:34.360 16:31:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:34.360 16:31:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:34.361 16:31:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:34.361 16:31:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:34.361 16:31:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:34.361 16:31:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:34.361 16:31:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:34.361 16:31:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:34.361 16:31:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.361 16:31:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.618 16:31:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:34.618 "name": "Existed_Raid", 00:15:34.618 "uuid": "3576f6e9-1960-4f65-9eb4-dae0d4d81732", 00:15:34.618 "strip_size_kb": 64, 00:15:34.618 "state": "online", 00:15:34.619 "raid_level": "concat", 00:15:34.619 "superblock": true, 00:15:34.619 "num_base_bdevs": 2, 00:15:34.619 "num_base_bdevs_discovered": 2, 00:15:34.619 "num_base_bdevs_operational": 2, 00:15:34.619 "base_bdevs_list": [ 00:15:34.619 { 00:15:34.619 "name": "BaseBdev1", 00:15:34.619 "uuid": "d11bb71b-28d4-431e-a7c5-d62f3257036d", 00:15:34.619 "is_configured": true, 00:15:34.619 "data_offset": 2048, 00:15:34.619 "data_size": 63488 00:15:34.619 }, 00:15:34.619 { 00:15:34.619 "name": "BaseBdev2", 00:15:34.619 "uuid": "e90ab4b4-61bb-459f-b7af-fa7b767a26d7", 00:15:34.619 "is_configured": true, 00:15:34.619 "data_offset": 2048, 00:15:34.619 "data_size": 63488 00:15:34.619 } 00:15:34.619 ] 00:15:34.619 }' 00:15:34.619 16:31:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:34.619 16:31:11 -- common/autotest_common.sh@10 -- # set +x 00:15:35.185 16:31:11 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:35.444 [2024-07-11 16:31:12.233798] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:35.444 [2024-07-11 16:31:12.233933] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.444 [2024-07-11 16:31:12.234101] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:35.702 
16:31:12 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.702 16:31:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.960 16:31:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:35.960 "name": "Existed_Raid", 00:15:35.960 "uuid": "3576f6e9-1960-4f65-9eb4-dae0d4d81732", 00:15:35.960 "strip_size_kb": 64, 00:15:35.960 "state": "offline", 00:15:35.960 "raid_level": "concat", 00:15:35.960 "superblock": true, 00:15:35.960 "num_base_bdevs": 2, 00:15:35.960 "num_base_bdevs_discovered": 1, 00:15:35.960 "num_base_bdevs_operational": 1, 00:15:35.960 "base_bdevs_list": [ 00:15:35.960 { 00:15:35.960 "name": null, 00:15:35.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.960 "is_configured": false, 00:15:35.960 "data_offset": 2048, 00:15:35.960 "data_size": 63488 00:15:35.960 }, 00:15:35.960 { 00:15:35.960 "name": "BaseBdev2", 00:15:35.960 "uuid": "e90ab4b4-61bb-459f-b7af-fa7b767a26d7", 00:15:35.960 "is_configured": true, 00:15:35.960 "data_offset": 2048, 00:15:35.960 "data_size": 63488 00:15:35.960 } 00:15:35.960 ] 00:15:35.960 }' 00:15:35.960 16:31:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:35.960 16:31:12 -- common/autotest_common.sh@10 -- # set +x 00:15:36.526 16:31:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:36.526 16:31:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:36.526 16:31:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.526 16:31:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:36.783 16:31:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:36.783 16:31:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:36.783 16:31:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:37.040 [2024-07-11 16:31:13.688860] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.040 [2024-07-11 16:31:13.689117] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:15:37.040 16:31:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:37.040 16:31:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:37.040 16:31:13 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.040 16:31:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:37.297 16:31:14 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:15:37.297 16:31:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:37.297 16:31:14 -- bdev/bdev_raid.sh@287 -- # killprocess 116048 00:15:37.297 16:31:14 -- common/autotest_common.sh@926 -- # '[' -z 116048 ']' 00:15:37.297 16:31:14 -- common/autotest_common.sh@930 -- # kill -0 116048 00:15:37.297 16:31:14 -- common/autotest_common.sh@931 -- # uname 00:15:37.297 16:31:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:37.297 16:31:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116048 00:15:37.297 killing process with pid 116048 00:15:37.297 16:31:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:37.297 16:31:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:37.297 16:31:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116048' 00:15:37.297 16:31:14 -- common/autotest_common.sh@945 -- # kill 116048 00:15:37.297 16:31:14 -- common/autotest_common.sh@950 -- # wait 116048 00:15:37.297 [2024-07-11 16:31:14.043131] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.297 [2024-07-11 16:31:14.043255] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.231 ************************************ 00:15:38.231 END TEST raid_state_function_test_sb 00:15:38.231 ************************************ 00:15:38.231 16:31:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:38.231 00:15:38.231 real 0m10.408s 00:15:38.231 user 0m18.424s 00:15:38.231 sys 0m1.128s 00:15:38.231 16:31:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.231 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:15:38.231 16:31:14 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:38.231 16:31:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:15:38.231 16:31:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:38.231 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:15:38.231 ************************************ 00:15:38.231 START TEST raid_superblock_test 00:15:38.231 ************************************ 00:15:38.231 16:31:14 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2 00:15:38.231 16:31:14 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:15:38.231 16:31:14 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:38.231 16:31:14 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:38.231 16:31:14 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@357 -- # raid_pid=116397 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@358 -- # waitforlisten 116397 
/var/tmp/spdk-raid.sock 00:15:38.231 16:31:15 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:38.231 16:31:15 -- common/autotest_common.sh@819 -- # '[' -z 116397 ']' 00:15:38.231 16:31:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:38.231 16:31:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:38.231 16:31:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:38.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:38.231 16:31:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:38.231 16:31:15 -- common/autotest_common.sh@10 -- # set +x 00:15:38.489 [2024-07-11 16:31:15.069793] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:38.489 [2024-07-11 16:31:15.070189] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116397 ] 00:15:38.489 [2024-07-11 16:31:15.235017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.754 [2024-07-11 16:31:15.390346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.754 [2024-07-11 16:31:15.558203] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.331 16:31:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:39.331 16:31:15 -- common/autotest_common.sh@852 -- # return 0 00:15:39.331 16:31:15 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:39.331 16:31:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:39.331 16:31:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:39.331 16:31:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:39.331 16:31:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:39.331 16:31:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:39.331 16:31:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:39.331 16:31:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:39.331 16:31:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:39.589 malloc1 00:15:39.589 16:31:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:39.847 [2024-07-11 16:31:16.426670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:39.847 [2024-07-11 16:31:16.426886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.847 [2024-07-11 16:31:16.427008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:39.847 [2024-07-11 16:31:16.427157] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.847 [2024-07-11 16:31:16.429251] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.847 [2024-07-11 16:31:16.429430] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:39.847 pt1 00:15:39.847 16:31:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
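
raid_superblock_test stacks a passthru bdev with a fixed UUID on top of each malloc disk, so the on-disk superblock records stable base-bdev UUIDs that survive teardown. A sketch of that setup loop for the two bases, using the commands traced above:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2; do
        $RPC bdev_malloc_create 32 512 -b "malloc$i"
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
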
00:15:39.847 16:31:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:39.847 16:31:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:39.847 16:31:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:39.847 16:31:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:39.847 16:31:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:39.847 16:31:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:39.847 16:31:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:39.847 16:31:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:39.847 malloc2 00:15:40.106 16:31:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:40.106 [2024-07-11 16:31:16.819379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:40.106 [2024-07-11 16:31:16.819584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.106 [2024-07-11 16:31:16.819719] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:40.106 [2024-07-11 16:31:16.819871] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.106 [2024-07-11 16:31:16.821907] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.106 [2024-07-11 16:31:16.822072] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:40.106 pt2 00:15:40.106 16:31:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:40.106 16:31:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:40.106 16:31:16 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:40.364 [2024-07-11 16:31:17.015454] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:40.364 [2024-07-11 16:31:17.017077] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:40.364 [2024-07-11 16:31:17.017435] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:40.364 [2024-07-11 16:31:17.017547] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:40.364 [2024-07-11 16:31:17.017693] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:40.364 [2024-07-11 16:31:17.018049] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:40.364 [2024-07-11 16:31:17.018150] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:40.364 [2024-07-11 16:31:17.018360] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.364 16:31:17 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:40.364 16:31:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:40.364 16:31:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:40.364 16:31:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:40.364 16:31:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:40.364 16:31:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:15:40.364 16:31:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:40.364 16:31:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:40.364 16:31:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:40.364 16:31:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:40.364 16:31:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.364 16:31:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.622 16:31:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:40.622 "name": "raid_bdev1", 00:15:40.622 "uuid": "f6ee278e-634a-4c37-9fee-512909a61344", 00:15:40.622 "strip_size_kb": 64, 00:15:40.622 "state": "online", 00:15:40.622 "raid_level": "concat", 00:15:40.622 "superblock": true, 00:15:40.622 "num_base_bdevs": 2, 00:15:40.622 "num_base_bdevs_discovered": 2, 00:15:40.622 "num_base_bdevs_operational": 2, 00:15:40.622 "base_bdevs_list": [ 00:15:40.622 { 00:15:40.622 "name": "pt1", 00:15:40.622 "uuid": "e1d82722-9fbd-57eb-802f-2b75fff7d13f", 00:15:40.622 "is_configured": true, 00:15:40.622 "data_offset": 2048, 00:15:40.622 "data_size": 63488 00:15:40.622 }, 00:15:40.622 { 00:15:40.622 "name": "pt2", 00:15:40.622 "uuid": "41655363-3000-553f-9d60-1322408e0881", 00:15:40.622 "is_configured": true, 00:15:40.623 "data_offset": 2048, 00:15:40.623 "data_size": 63488 00:15:40.623 } 00:15:40.623 ] 00:15:40.623 }' 00:15:40.623 16:31:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:40.623 16:31:17 -- common/autotest_common.sh@10 -- # set +x 00:15:41.188 16:31:17 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:41.188 16:31:17 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:41.446 [2024-07-11 16:31:18.107768] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.446 16:31:18 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f6ee278e-634a-4c37-9fee-512909a61344 00:15:41.446 16:31:18 -- bdev/bdev_raid.sh@380 -- # '[' -z f6ee278e-634a-4c37-9fee-512909a61344 ']' 00:15:41.446 16:31:18 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:41.704 [2024-07-11 16:31:18.331625] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.704 [2024-07-11 16:31:18.331753] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.704 [2024-07-11 16:31:18.331916] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.704 [2024-07-11 16:31:18.332056] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.704 [2024-07-11 16:31:18.332145] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:41.704 16:31:18 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.704 16:31:18 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:41.962 16:31:18 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:41.962 16:31:18 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:41.963 16:31:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:41.963 16:31:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:15:41.963 16:31:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:41.963 16:31:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:42.221 16:31:18 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:42.221 16:31:18 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:42.478 16:31:19 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:42.478 16:31:19 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:42.478 16:31:19 -- common/autotest_common.sh@640 -- # local es=0 00:15:42.478 16:31:19 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:42.478 16:31:19 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.478 16:31:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:42.478 16:31:19 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.478 16:31:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:42.479 16:31:19 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.479 16:31:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:42.479 16:31:19 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.479 16:31:19 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:42.479 16:31:19 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:42.737 [2024-07-11 16:31:19.343785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:42.737 [2024-07-11 16:31:19.345768] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:42.737 [2024-07-11 16:31:19.345955] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:42.737 [2024-07-11 16:31:19.346145] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:42.737 [2024-07-11 16:31:19.346269] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:42.737 [2024-07-11 16:31:19.346306] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:15:42.737 request: 00:15:42.737 { 00:15:42.737 "name": "raid_bdev1", 00:15:42.737 "raid_level": "concat", 00:15:42.737 "base_bdevs": [ 00:15:42.737 "malloc1", 00:15:42.737 "malloc2" 00:15:42.737 ], 00:15:42.737 "superblock": false, 00:15:42.737 "strip_size_kb": 64, 00:15:42.737 "method": "bdev_raid_create", 00:15:42.737 "req_id": 1 00:15:42.737 } 00:15:42.737 Got JSON-RPC error response 00:15:42.737 response: 00:15:42.737 { 00:15:42.737 "code": -17, 00:15:42.737 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:42.737 } 00:15:42.737 16:31:19 -- common/autotest_common.sh@643 -- # es=1 00:15:42.737 16:31:19 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:42.737 16:31:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:42.737 16:31:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:42.737 16:31:19 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:42.737 16:31:19 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:42.996 [2024-07-11 16:31:19.775810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:42.996 [2024-07-11 16:31:19.776013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:42.996 [2024-07-11 16:31:19.776143] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:42.996 [2024-07-11 16:31:19.776255] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:42.996 [2024-07-11 16:31:19.778208] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:42.996 [2024-07-11 16:31:19.778373] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:42.996 [2024-07-11 16:31:19.778596] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:42.996 [2024-07-11 16:31:19.778757] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:42.996 pt1 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.996 16:31:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.254 16:31:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.254 "name": "raid_bdev1", 00:15:43.254 "uuid": "f6ee278e-634a-4c37-9fee-512909a61344", 00:15:43.254 "strip_size_kb": 64, 00:15:43.254 "state": "configuring", 00:15:43.254 "raid_level": "concat", 00:15:43.254 "superblock": true, 00:15:43.254 "num_base_bdevs": 2, 00:15:43.254 "num_base_bdevs_discovered": 1, 00:15:43.254 "num_base_bdevs_operational": 2, 00:15:43.254 "base_bdevs_list": [ 00:15:43.254 { 00:15:43.254 "name": "pt1", 00:15:43.254 "uuid": "e1d82722-9fbd-57eb-802f-2b75fff7d13f", 00:15:43.254 "is_configured": true, 00:15:43.254 "data_offset": 2048, 00:15:43.254 "data_size": 63488 00:15:43.254 }, 00:15:43.254 { 00:15:43.254 "name": null, 00:15:43.254 "uuid": 
"41655363-3000-553f-9d60-1322408e0881", 00:15:43.254 "is_configured": false, 00:15:43.254 "data_offset": 2048, 00:15:43.254 "data_size": 63488 00:15:43.254 } 00:15:43.254 ] 00:15:43.254 }' 00:15:43.254 16:31:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.254 16:31:19 -- common/autotest_common.sh@10 -- # set +x 00:15:43.821 16:31:20 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:43.821 16:31:20 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:43.821 16:31:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:43.821 16:31:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:44.079 [2024-07-11 16:31:20.764028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:44.079 [2024-07-11 16:31:20.764232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.079 [2024-07-11 16:31:20.764296] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:44.079 [2024-07-11 16:31:20.764542] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.079 [2024-07-11 16:31:20.765020] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.079 [2024-07-11 16:31:20.765172] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:44.079 [2024-07-11 16:31:20.765399] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:44.079 [2024-07-11 16:31:20.765534] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:44.079 [2024-07-11 16:31:20.765679] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:15:44.079 [2024-07-11 16:31:20.765773] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:44.080 [2024-07-11 16:31:20.765933] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:44.080 [2024-07-11 16:31:20.766237] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:15:44.080 [2024-07-11 16:31:20.766342] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:15:44.080 [2024-07-11 16:31:20.766552] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.080 pt2 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.080 16:31:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.338 16:31:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:44.339 "name": "raid_bdev1", 00:15:44.339 "uuid": "f6ee278e-634a-4c37-9fee-512909a61344", 00:15:44.339 "strip_size_kb": 64, 00:15:44.339 "state": "online", 00:15:44.339 "raid_level": "concat", 00:15:44.339 "superblock": true, 00:15:44.339 "num_base_bdevs": 2, 00:15:44.339 "num_base_bdevs_discovered": 2, 00:15:44.339 "num_base_bdevs_operational": 2, 00:15:44.339 "base_bdevs_list": [ 00:15:44.339 { 00:15:44.339 "name": "pt1", 00:15:44.339 "uuid": "e1d82722-9fbd-57eb-802f-2b75fff7d13f", 00:15:44.339 "is_configured": true, 00:15:44.339 "data_offset": 2048, 00:15:44.339 "data_size": 63488 00:15:44.339 }, 00:15:44.339 { 00:15:44.339 "name": "pt2", 00:15:44.339 "uuid": "41655363-3000-553f-9d60-1322408e0881", 00:15:44.339 "is_configured": true, 00:15:44.339 "data_offset": 2048, 00:15:44.339 "data_size": 63488 00:15:44.339 } 00:15:44.339 ] 00:15:44.339 }' 00:15:44.339 16:31:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:44.339 16:31:20 -- common/autotest_common.sh@10 -- # set +x 00:15:44.905 16:31:21 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:44.905 16:31:21 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:45.163 [2024-07-11 16:31:21.880417] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.163 16:31:21 -- bdev/bdev_raid.sh@430 -- # '[' f6ee278e-634a-4c37-9fee-512909a61344 '!=' f6ee278e-634a-4c37-9fee-512909a61344 ']' 00:15:45.163 16:31:21 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:15:45.163 16:31:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:45.163 16:31:21 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:45.163 16:31:21 -- bdev/bdev_raid.sh@511 -- # killprocess 116397 00:15:45.163 16:31:21 -- common/autotest_common.sh@926 -- # '[' -z 116397 ']' 00:15:45.163 16:31:21 -- common/autotest_common.sh@930 -- # kill -0 116397 00:15:45.164 16:31:21 -- common/autotest_common.sh@931 -- # uname 00:15:45.164 16:31:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:45.164 16:31:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116397 00:15:45.164 16:31:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:45.164 16:31:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:45.164 16:31:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116397' 00:15:45.164 killing process with pid 116397 00:15:45.164 16:31:21 -- common/autotest_common.sh@945 -- # kill 116397 00:15:45.164 16:31:21 -- common/autotest_common.sh@950 -- # wait 116397 00:15:45.164 [2024-07-11 16:31:21.907989] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:45.164 [2024-07-11 16:31:21.908261] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.164 [2024-07-11 16:31:21.908421] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.164 [2024-07-11 16:31:21.908514] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:15:45.422 [2024-07-11 16:31:22.037235] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:46.375 ************************************ 
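
Because raid_bdev1 was created with -s, its superblock persists on malloc1 and malloc2 after the passthru bdevs are deleted: the bdev_raid_create attempt without -s above fails with JSON-RPC error -17 ("Failed to create RAID bdev raid_bdev1: File exists"), while simply re-registering the passthru bdevs is enough for examine to find the superblock and bring the array back online with no second create call. A sketch of that reassembly check:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2; do
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    $RPC bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expect: online
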
00:15:46.375 END TEST raid_superblock_test 00:15:46.375 ************************************ 00:15:46.375 00:15:46.375 real 0m7.938s 00:15:46.375 user 0m13.775s 00:15:46.375 sys 0m0.802s 00:15:46.375 16:31:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.375 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:46.375 16:31:22 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:46.375 16:31:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:46.375 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:15:46.375 ************************************ 00:15:46.375 START TEST raid_state_function_test 00:15:46.375 ************************************ 00:15:46.375 16:31:22 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@226 -- # raid_pid=116640 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116640' 00:15:46.375 Process raid pid: 116640 00:15:46.375 16:31:22 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116640 /var/tmp/spdk-raid.sock 00:15:46.375 16:31:22 -- common/autotest_common.sh@819 -- # '[' -z 116640 ']' 00:15:46.375 16:31:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:46.375 16:31:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:46.375 16:31:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:15:46.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:46.375 16:31:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:46.375 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:15:46.375 [2024-07-11 16:31:23.043157] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:46.375 [2024-07-11 16:31:23.043530] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.635 [2024-07-11 16:31:23.186228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.635 [2024-07-11 16:31:23.357853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.893 [2024-07-11 16:31:23.528218] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.460 16:31:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:47.460 16:31:24 -- common/autotest_common.sh@852 -- # return 0 00:15:47.460 16:31:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:47.719 [2024-07-11 16:31:24.274604] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:47.719 [2024-07-11 16:31:24.274798] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:47.719 [2024-07-11 16:31:24.274916] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.719 [2024-07-11 16:31:24.274971] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:47.719 "name": "Existed_Raid", 00:15:47.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.719 "strip_size_kb": 0, 00:15:47.719 "state": "configuring", 00:15:47.719 "raid_level": "raid1", 00:15:47.719 "superblock": false, 00:15:47.719 "num_base_bdevs": 2, 00:15:47.719 "num_base_bdevs_discovered": 0, 00:15:47.719 "num_base_bdevs_operational": 2, 00:15:47.719 "base_bdevs_list": [ 00:15:47.719 { 00:15:47.719 "name": "BaseBdev1", 00:15:47.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.719 "is_configured": false, 00:15:47.719 "data_offset": 0, 00:15:47.719 "data_size": 0 
00:15:47.719 }, 00:15:47.719 { 00:15:47.719 "name": "BaseBdev2", 00:15:47.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.719 "is_configured": false, 00:15:47.719 "data_offset": 0, 00:15:47.719 "data_size": 0 00:15:47.719 } 00:15:47.719 ] 00:15:47.719 }' 00:15:47.719 16:31:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:47.719 16:31:24 -- common/autotest_common.sh@10 -- # set +x 00:15:48.656 16:31:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:48.656 [2024-07-11 16:31:25.442707] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.656 [2024-07-11 16:31:25.442849] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:48.656 16:31:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:48.915 [2024-07-11 16:31:25.622749] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:48.915 [2024-07-11 16:31:25.622942] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:48.915 [2024-07-11 16:31:25.623068] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:48.915 [2024-07-11 16:31:25.623124] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:48.915 16:31:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:49.173 [2024-07-11 16:31:25.827837] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:49.173 BaseBdev1 00:15:49.173 16:31:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:49.173 16:31:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:49.173 16:31:25 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:49.173 16:31:25 -- common/autotest_common.sh@889 -- # local i 00:15:49.173 16:31:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:49.173 16:31:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:49.173 16:31:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:49.432 16:31:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:49.432 [ 00:15:49.432 { 00:15:49.432 "name": "BaseBdev1", 00:15:49.432 "aliases": [ 00:15:49.432 "fcf4c6cf-3d36-46d2-9fc9-61d8c48bed33" 00:15:49.432 ], 00:15:49.432 "product_name": "Malloc disk", 00:15:49.432 "block_size": 512, 00:15:49.432 "num_blocks": 65536, 00:15:49.432 "uuid": "fcf4c6cf-3d36-46d2-9fc9-61d8c48bed33", 00:15:49.432 "assigned_rate_limits": { 00:15:49.432 "rw_ios_per_sec": 0, 00:15:49.432 "rw_mbytes_per_sec": 0, 00:15:49.432 "r_mbytes_per_sec": 0, 00:15:49.432 "w_mbytes_per_sec": 0 00:15:49.432 }, 00:15:49.432 "claimed": true, 00:15:49.432 "claim_type": "exclusive_write", 00:15:49.432 "zoned": false, 00:15:49.432 "supported_io_types": { 00:15:49.432 "read": true, 00:15:49.432 "write": true, 00:15:49.432 "unmap": true, 00:15:49.432 "write_zeroes": true, 00:15:49.432 "flush": true, 00:15:49.432 "reset": true, 00:15:49.432 "compare": false, 00:15:49.432 "compare_and_write": false, 
00:15:49.432 "abort": true, 00:15:49.432 "nvme_admin": false, 00:15:49.432 "nvme_io": false 00:15:49.432 }, 00:15:49.432 "memory_domains": [ 00:15:49.432 { 00:15:49.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.432 "dma_device_type": 2 00:15:49.432 } 00:15:49.432 ], 00:15:49.432 "driver_specific": {} 00:15:49.432 } 00:15:49.432 ] 00:15:49.737 16:31:26 -- common/autotest_common.sh@895 -- # return 0 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:49.737 "name": "Existed_Raid", 00:15:49.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.737 "strip_size_kb": 0, 00:15:49.737 "state": "configuring", 00:15:49.737 "raid_level": "raid1", 00:15:49.737 "superblock": false, 00:15:49.737 "num_base_bdevs": 2, 00:15:49.737 "num_base_bdevs_discovered": 1, 00:15:49.737 "num_base_bdevs_operational": 2, 00:15:49.737 "base_bdevs_list": [ 00:15:49.737 { 00:15:49.737 "name": "BaseBdev1", 00:15:49.737 "uuid": "fcf4c6cf-3d36-46d2-9fc9-61d8c48bed33", 00:15:49.737 "is_configured": true, 00:15:49.737 "data_offset": 0, 00:15:49.737 "data_size": 65536 00:15:49.737 }, 00:15:49.737 { 00:15:49.737 "name": "BaseBdev2", 00:15:49.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.737 "is_configured": false, 00:15:49.737 "data_offset": 0, 00:15:49.737 "data_size": 0 00:15:49.737 } 00:15:49.737 ] 00:15:49.737 }' 00:15:49.737 16:31:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:49.737 16:31:26 -- common/autotest_common.sh@10 -- # set +x 00:15:50.304 16:31:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:50.564 [2024-07-11 16:31:27.220087] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.564 [2024-07-11 16:31:27.220250] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:50.564 16:31:27 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:50.564 16:31:27 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:50.829 [2024-07-11 16:31:27.464180] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.829 [2024-07-11 16:31:27.465912] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.829 [2024-07-11 16:31:27.466083] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.829 16:31:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.087 16:31:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:51.087 "name": "Existed_Raid", 00:15:51.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.087 "strip_size_kb": 0, 00:15:51.087 "state": "configuring", 00:15:51.087 "raid_level": "raid1", 00:15:51.087 "superblock": false, 00:15:51.087 "num_base_bdevs": 2, 00:15:51.087 "num_base_bdevs_discovered": 1, 00:15:51.087 "num_base_bdevs_operational": 2, 00:15:51.087 "base_bdevs_list": [ 00:15:51.087 { 00:15:51.087 "name": "BaseBdev1", 00:15:51.087 "uuid": "fcf4c6cf-3d36-46d2-9fc9-61d8c48bed33", 00:15:51.087 "is_configured": true, 00:15:51.087 "data_offset": 0, 00:15:51.087 "data_size": 65536 00:15:51.087 }, 00:15:51.087 { 00:15:51.087 "name": "BaseBdev2", 00:15:51.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.087 "is_configured": false, 00:15:51.087 "data_offset": 0, 00:15:51.087 "data_size": 0 00:15:51.087 } 00:15:51.087 ] 00:15:51.087 }' 00:15:51.087 16:31:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:51.087 16:31:27 -- common/autotest_common.sh@10 -- # set +x 00:15:51.683 16:31:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:51.940 [2024-07-11 16:31:28.508029] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.940 [2024-07-11 16:31:28.508230] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:51.940 [2024-07-11 16:31:28.508269] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:51.940 [2024-07-11 16:31:28.508472] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:15:51.940 [2024-07-11 16:31:28.508911] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:51.940 [2024-07-11 16:31:28.509105] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:51.940 [2024-07-11 16:31:28.509516] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.940 BaseBdev2 00:15:51.940 16:31:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:51.940 16:31:28 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:15:51.940 16:31:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:51.940 16:31:28 -- common/autotest_common.sh@889 -- # local i 00:15:51.940 16:31:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:51.940 16:31:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:51.940 16:31:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:51.940 16:31:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:52.198 [ 00:15:52.198 { 00:15:52.198 "name": "BaseBdev2", 00:15:52.198 "aliases": [ 00:15:52.198 "7feeba5e-e21d-400a-8831-c26802c086f1" 00:15:52.198 ], 00:15:52.198 "product_name": "Malloc disk", 00:15:52.198 "block_size": 512, 00:15:52.198 "num_blocks": 65536, 00:15:52.199 "uuid": "7feeba5e-e21d-400a-8831-c26802c086f1", 00:15:52.199 "assigned_rate_limits": { 00:15:52.199 "rw_ios_per_sec": 0, 00:15:52.199 "rw_mbytes_per_sec": 0, 00:15:52.199 "r_mbytes_per_sec": 0, 00:15:52.199 "w_mbytes_per_sec": 0 00:15:52.199 }, 00:15:52.199 "claimed": true, 00:15:52.199 "claim_type": "exclusive_write", 00:15:52.199 "zoned": false, 00:15:52.199 "supported_io_types": { 00:15:52.199 "read": true, 00:15:52.199 "write": true, 00:15:52.199 "unmap": true, 00:15:52.199 "write_zeroes": true, 00:15:52.199 "flush": true, 00:15:52.199 "reset": true, 00:15:52.199 "compare": false, 00:15:52.199 "compare_and_write": false, 00:15:52.199 "abort": true, 00:15:52.199 "nvme_admin": false, 00:15:52.199 "nvme_io": false 00:15:52.199 }, 00:15:52.199 "memory_domains": [ 00:15:52.199 { 00:15:52.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.199 "dma_device_type": 2 00:15:52.199 } 00:15:52.199 ], 00:15:52.199 "driver_specific": {} 00:15:52.199 } 00:15:52.199 ] 00:15:52.199 16:31:28 -- common/autotest_common.sh@895 -- # return 0 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.199 16:31:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.457 16:31:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.457 "name": "Existed_Raid", 00:15:52.457 "uuid": "1382d8e9-aebf-40d0-b5b1-92d94c432d7f", 00:15:52.457 "strip_size_kb": 0, 00:15:52.457 "state": "online", 00:15:52.457 "raid_level": "raid1", 00:15:52.457 "superblock": false, 00:15:52.457 "num_base_bdevs": 2, 00:15:52.457 
"num_base_bdevs_discovered": 2, 00:15:52.457 "num_base_bdevs_operational": 2, 00:15:52.457 "base_bdevs_list": [ 00:15:52.457 { 00:15:52.457 "name": "BaseBdev1", 00:15:52.457 "uuid": "fcf4c6cf-3d36-46d2-9fc9-61d8c48bed33", 00:15:52.457 "is_configured": true, 00:15:52.457 "data_offset": 0, 00:15:52.457 "data_size": 65536 00:15:52.457 }, 00:15:52.457 { 00:15:52.457 "name": "BaseBdev2", 00:15:52.457 "uuid": "7feeba5e-e21d-400a-8831-c26802c086f1", 00:15:52.457 "is_configured": true, 00:15:52.457 "data_offset": 0, 00:15:52.457 "data_size": 65536 00:15:52.457 } 00:15:52.457 ] 00:15:52.457 }' 00:15:52.457 16:31:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.457 16:31:29 -- common/autotest_common.sh@10 -- # set +x 00:15:53.391 16:31:29 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:53.391 [2024-07-11 16:31:30.096392] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.391 16:31:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.648 16:31:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:53.648 "name": "Existed_Raid", 00:15:53.648 "uuid": "1382d8e9-aebf-40d0-b5b1-92d94c432d7f", 00:15:53.648 "strip_size_kb": 0, 00:15:53.648 "state": "online", 00:15:53.648 "raid_level": "raid1", 00:15:53.648 "superblock": false, 00:15:53.648 "num_base_bdevs": 2, 00:15:53.648 "num_base_bdevs_discovered": 1, 00:15:53.648 "num_base_bdevs_operational": 1, 00:15:53.648 "base_bdevs_list": [ 00:15:53.648 { 00:15:53.648 "name": null, 00:15:53.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.648 "is_configured": false, 00:15:53.648 "data_offset": 0, 00:15:53.648 "data_size": 65536 00:15:53.648 }, 00:15:53.648 { 00:15:53.648 "name": "BaseBdev2", 00:15:53.648 "uuid": "7feeba5e-e21d-400a-8831-c26802c086f1", 00:15:53.648 "is_configured": true, 00:15:53.648 "data_offset": 0, 00:15:53.648 "data_size": 65536 00:15:53.648 } 00:15:53.648 ] 00:15:53.648 }' 00:15:53.648 16:31:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:53.648 16:31:30 -- common/autotest_common.sh@10 -- # set +x 00:15:54.583 16:31:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:54.583 16:31:31 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:15:54.583 16:31:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.583 16:31:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:54.583 16:31:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:54.583 16:31:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.583 16:31:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:54.841 [2024-07-11 16:31:31.513271] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.841 [2024-07-11 16:31:31.513414] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.841 [2024-07-11 16:31:31.513594] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.841 [2024-07-11 16:31:31.576476] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.841 [2024-07-11 16:31:31.576695] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:54.841 16:31:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:54.841 16:31:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:54.841 16:31:31 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.841 16:31:31 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:55.100 16:31:31 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:55.100 16:31:31 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:55.100 16:31:31 -- bdev/bdev_raid.sh@287 -- # killprocess 116640 00:15:55.100 16:31:31 -- common/autotest_common.sh@926 -- # '[' -z 116640 ']' 00:15:55.100 16:31:31 -- common/autotest_common.sh@930 -- # kill -0 116640 00:15:55.100 16:31:31 -- common/autotest_common.sh@931 -- # uname 00:15:55.100 16:31:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:55.100 16:31:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116640 00:15:55.100 killing process with pid 116640 00:15:55.100 16:31:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:55.100 16:31:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:55.100 16:31:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116640' 00:15:55.100 16:31:31 -- common/autotest_common.sh@945 -- # kill 116640 00:15:55.100 16:31:31 -- common/autotest_common.sh@950 -- # wait 116640 00:15:55.100 [2024-07-11 16:31:31.839756] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.100 [2024-07-11 16:31:31.839881] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.035 ************************************ 00:15:56.035 END TEST raid_state_function_test 00:15:56.035 ************************************ 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:56.035 00:15:56.035 real 0m9.755s 00:15:56.035 user 0m17.228s 00:15:56.035 sys 0m1.094s 00:15:56.035 16:31:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.035 16:31:32 -- common/autotest_common.sh@10 -- # set +x 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:56.035 16:31:32 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:15:56.035 16:31:32 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:15:56.035 16:31:32 -- common/autotest_common.sh@10 -- # set +x 00:15:56.035 ************************************ 00:15:56.035 START TEST raid_state_function_test_sb 00:15:56.035 ************************************ 00:15:56.035 16:31:32 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@226 -- # raid_pid=116959 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116959' 00:15:56.035 Process raid pid: 116959 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:56.035 16:31:32 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116959 /var/tmp/spdk-raid.sock 00:15:56.035 16:31:32 -- common/autotest_common.sh@819 -- # '[' -z 116959 ']' 00:15:56.035 16:31:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:56.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:56.035 16:31:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:56.035 16:31:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:56.035 16:31:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:56.035 16:31:32 -- common/autotest_common.sh@10 -- # set +x 00:15:56.295 [2024-07-11 16:31:32.874921] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
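[Note: the create/verify cycle this state-function test traces can be reproduced by hand against the same bdev_svc app. This is a minimal sketch assembled only from the rpc.py and jq invocations visible in this trace; the socket path and bdev names are the ones this run uses, and the trailing .state selector is an assumption added here for brevity — the test itself selects the whole object and checks fields individually.]
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # RPC client shipped with SPDK
sock=/var/tmp/spdk-raid.sock                       # socket bdev_svc listens on (-r flag above)
# create the two malloc base bdevs the test claims for the array
$rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
$rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2
# assemble a raid1 bdev; -s requests an on-disk superblock, as in this _sb variant
$rpc -s $sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# verify_raid_bdev_state inspects the same JSON dump the trace shows
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # expect "online"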
00:15:56.295 [2024-07-11 16:31:32.875298] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.295 [2024-07-11 16:31:33.039944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.555 [2024-07-11 16:31:33.212223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.813 [2024-07-11 16:31:33.379266] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.070 16:31:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:57.070 16:31:33 -- common/autotest_common.sh@852 -- # return 0 00:15:57.070 16:31:33 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:57.329 [2024-07-11 16:31:34.013750] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.329 [2024-07-11 16:31:34.013950] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.329 [2024-07-11 16:31:34.014049] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.329 [2024-07-11 16:31:34.014168] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.329 16:31:34 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:57.329 16:31:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:57.329 16:31:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:57.329 16:31:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:57.329 16:31:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:57.329 16:31:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:57.329 16:31:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:57.329 16:31:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:57.329 16:31:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:57.329 16:31:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:57.329 16:31:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.329 16:31:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.587 16:31:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:57.587 "name": "Existed_Raid", 00:15:57.587 "uuid": "30771d6f-afa5-42ca-a57d-6d5b0c561f5f", 00:15:57.587 "strip_size_kb": 0, 00:15:57.587 "state": "configuring", 00:15:57.587 "raid_level": "raid1", 00:15:57.587 "superblock": true, 00:15:57.587 "num_base_bdevs": 2, 00:15:57.587 "num_base_bdevs_discovered": 0, 00:15:57.587 "num_base_bdevs_operational": 2, 00:15:57.587 "base_bdevs_list": [ 00:15:57.587 { 00:15:57.587 "name": "BaseBdev1", 00:15:57.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.587 "is_configured": false, 00:15:57.587 "data_offset": 0, 00:15:57.587 "data_size": 0 00:15:57.587 }, 00:15:57.587 { 00:15:57.587 "name": "BaseBdev2", 00:15:57.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.587 "is_configured": false, 00:15:57.587 "data_offset": 0, 00:15:57.587 "data_size": 0 00:15:57.587 } 00:15:57.587 ] 00:15:57.587 }' 00:15:57.587 16:31:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:57.587 16:31:34 -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.155 16:31:34 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:58.413 [2024-07-11 16:31:35.165807] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.413 [2024-07-11 16:31:35.165946] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:58.413 16:31:35 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:58.671 [2024-07-11 16:31:35.385891] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.671 [2024-07-11 16:31:35.386081] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.671 [2024-07-11 16:31:35.386178] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.671 [2024-07-11 16:31:35.386233] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.671 16:31:35 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:58.929 [2024-07-11 16:31:35.603622] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.929 BaseBdev1 00:15:58.929 16:31:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:58.929 16:31:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:15:58.929 16:31:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:58.929 16:31:35 -- common/autotest_common.sh@889 -- # local i 00:15:58.929 16:31:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:58.929 16:31:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:58.929 16:31:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:59.188 16:31:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:59.188 [ 00:15:59.188 { 00:15:59.188 "name": "BaseBdev1", 00:15:59.188 "aliases": [ 00:15:59.188 "03f7524c-7610-4fcf-a347-fadd356f8336" 00:15:59.188 ], 00:15:59.188 "product_name": "Malloc disk", 00:15:59.188 "block_size": 512, 00:15:59.188 "num_blocks": 65536, 00:15:59.188 "uuid": "03f7524c-7610-4fcf-a347-fadd356f8336", 00:15:59.188 "assigned_rate_limits": { 00:15:59.188 "rw_ios_per_sec": 0, 00:15:59.188 "rw_mbytes_per_sec": 0, 00:15:59.188 "r_mbytes_per_sec": 0, 00:15:59.188 "w_mbytes_per_sec": 0 00:15:59.188 }, 00:15:59.188 "claimed": true, 00:15:59.188 "claim_type": "exclusive_write", 00:15:59.188 "zoned": false, 00:15:59.188 "supported_io_types": { 00:15:59.188 "read": true, 00:15:59.188 "write": true, 00:15:59.188 "unmap": true, 00:15:59.188 "write_zeroes": true, 00:15:59.188 "flush": true, 00:15:59.188 "reset": true, 00:15:59.188 "compare": false, 00:15:59.188 "compare_and_write": false, 00:15:59.188 "abort": true, 00:15:59.188 "nvme_admin": false, 00:15:59.188 "nvme_io": false 00:15:59.188 }, 00:15:59.188 "memory_domains": [ 00:15:59.188 { 00:15:59.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.188 "dma_device_type": 2 00:15:59.188 } 00:15:59.188 ], 00:15:59.188 "driver_specific": {} 00:15:59.188 } 00:15:59.188 ] 00:15:59.188 16:31:35 -- 
common/autotest_common.sh@895 -- # return 0 00:15:59.188 16:31:35 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:59.188 16:31:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:59.188 16:31:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:59.188 16:31:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:59.188 16:31:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:59.188 16:31:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:59.188 16:31:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.188 16:31:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.188 16:31:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.188 16:31:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.188 16:31:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.188 16:31:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.446 16:31:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.446 "name": "Existed_Raid", 00:15:59.446 "uuid": "d759c5d1-d55d-45a2-aa30-a15c96cdd56b", 00:15:59.446 "strip_size_kb": 0, 00:15:59.446 "state": "configuring", 00:15:59.446 "raid_level": "raid1", 00:15:59.446 "superblock": true, 00:15:59.446 "num_base_bdevs": 2, 00:15:59.446 "num_base_bdevs_discovered": 1, 00:15:59.446 "num_base_bdevs_operational": 2, 00:15:59.446 "base_bdevs_list": [ 00:15:59.446 { 00:15:59.446 "name": "BaseBdev1", 00:15:59.446 "uuid": "03f7524c-7610-4fcf-a347-fadd356f8336", 00:15:59.446 "is_configured": true, 00:15:59.446 "data_offset": 2048, 00:15:59.446 "data_size": 63488 00:15:59.446 }, 00:15:59.446 { 00:15:59.446 "name": "BaseBdev2", 00:15:59.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.446 "is_configured": false, 00:15:59.446 "data_offset": 0, 00:15:59.446 "data_size": 0 00:15:59.446 } 00:15:59.446 ] 00:15:59.446 }' 00:15:59.446 16:31:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.446 16:31:36 -- common/autotest_common.sh@10 -- # set +x 00:16:00.381 16:31:36 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:00.381 [2024-07-11 16:31:36.995883] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.381 [2024-07-11 16:31:36.996037] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:00.381 16:31:37 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:00.381 16:31:37 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:00.640 16:31:37 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.898 BaseBdev1 00:16:00.898 16:31:37 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:00.898 16:31:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:00.898 16:31:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:00.898 16:31:37 -- common/autotest_common.sh@889 -- # local i 00:16:00.898 16:31:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:00.898 16:31:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:00.898 16:31:37 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:00.898 16:31:37 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:01.157 [ 00:16:01.157 { 00:16:01.157 "name": "BaseBdev1", 00:16:01.157 "aliases": [ 00:16:01.157 "8d6c5f13-262a-46ce-986f-e1e2ab7d815f" 00:16:01.157 ], 00:16:01.157 "product_name": "Malloc disk", 00:16:01.157 "block_size": 512, 00:16:01.157 "num_blocks": 65536, 00:16:01.157 "uuid": "8d6c5f13-262a-46ce-986f-e1e2ab7d815f", 00:16:01.157 "assigned_rate_limits": { 00:16:01.157 "rw_ios_per_sec": 0, 00:16:01.157 "rw_mbytes_per_sec": 0, 00:16:01.157 "r_mbytes_per_sec": 0, 00:16:01.157 "w_mbytes_per_sec": 0 00:16:01.157 }, 00:16:01.157 "claimed": false, 00:16:01.157 "zoned": false, 00:16:01.157 "supported_io_types": { 00:16:01.157 "read": true, 00:16:01.157 "write": true, 00:16:01.157 "unmap": true, 00:16:01.157 "write_zeroes": true, 00:16:01.157 "flush": true, 00:16:01.157 "reset": true, 00:16:01.157 "compare": false, 00:16:01.157 "compare_and_write": false, 00:16:01.157 "abort": true, 00:16:01.157 "nvme_admin": false, 00:16:01.157 "nvme_io": false 00:16:01.157 }, 00:16:01.157 "memory_domains": [ 00:16:01.157 { 00:16:01.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.157 "dma_device_type": 2 00:16:01.157 } 00:16:01.157 ], 00:16:01.157 "driver_specific": {} 00:16:01.157 } 00:16:01.157 ] 00:16:01.157 16:31:37 -- common/autotest_common.sh@895 -- # return 0 00:16:01.157 16:31:37 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:01.416 [2024-07-11 16:31:38.045937] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.416 [2024-07-11 16:31:38.047574] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.416 [2024-07-11 16:31:38.047737] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.416 16:31:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.674 16:31:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.674 "name": "Existed_Raid", 00:16:01.674 "uuid": "6ffc3949-d95a-41ca-bb56-7ffa96bd7e58", 00:16:01.674 "strip_size_kb": 0, 00:16:01.674 "state": "configuring", 
00:16:01.674 "raid_level": "raid1", 00:16:01.674 "superblock": true, 00:16:01.674 "num_base_bdevs": 2, 00:16:01.674 "num_base_bdevs_discovered": 1, 00:16:01.674 "num_base_bdevs_operational": 2, 00:16:01.674 "base_bdevs_list": [ 00:16:01.674 { 00:16:01.674 "name": "BaseBdev1", 00:16:01.674 "uuid": "8d6c5f13-262a-46ce-986f-e1e2ab7d815f", 00:16:01.674 "is_configured": true, 00:16:01.674 "data_offset": 2048, 00:16:01.674 "data_size": 63488 00:16:01.674 }, 00:16:01.674 { 00:16:01.674 "name": "BaseBdev2", 00:16:01.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.674 "is_configured": false, 00:16:01.674 "data_offset": 0, 00:16:01.674 "data_size": 0 00:16:01.674 } 00:16:01.674 ] 00:16:01.674 }' 00:16:01.674 16:31:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.674 16:31:38 -- common/autotest_common.sh@10 -- # set +x 00:16:02.242 16:31:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:02.501 [2024-07-11 16:31:39.173562] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.501 [2024-07-11 16:31:39.173960] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:02.501 BaseBdev2 00:16:02.501 [2024-07-11 16:31:39.174417] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:02.501 [2024-07-11 16:31:39.174636] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:02.501 [2024-07-11 16:31:39.178014] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:02.501 [2024-07-11 16:31:39.178308] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:02.501 [2024-07-11 16:31:39.178955] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.501 16:31:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:02.501 16:31:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:02.501 16:31:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:02.501 16:31:39 -- common/autotest_common.sh@889 -- # local i 00:16:02.501 16:31:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:02.501 16:31:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:02.501 16:31:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:02.759 16:31:39 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:03.025 [ 00:16:03.025 { 00:16:03.025 "name": "BaseBdev2", 00:16:03.025 "aliases": [ 00:16:03.025 "b6491e07-54d1-4ed1-9380-0e3c18fe6082" 00:16:03.025 ], 00:16:03.025 "product_name": "Malloc disk", 00:16:03.025 "block_size": 512, 00:16:03.025 "num_blocks": 65536, 00:16:03.025 "uuid": "b6491e07-54d1-4ed1-9380-0e3c18fe6082", 00:16:03.025 "assigned_rate_limits": { 00:16:03.025 "rw_ios_per_sec": 0, 00:16:03.025 "rw_mbytes_per_sec": 0, 00:16:03.025 "r_mbytes_per_sec": 0, 00:16:03.025 "w_mbytes_per_sec": 0 00:16:03.025 }, 00:16:03.025 "claimed": true, 00:16:03.025 "claim_type": "exclusive_write", 00:16:03.025 "zoned": false, 00:16:03.025 "supported_io_types": { 00:16:03.025 "read": true, 00:16:03.025 "write": true, 00:16:03.025 "unmap": true, 00:16:03.025 "write_zeroes": true, 00:16:03.025 "flush": true, 00:16:03.025 "reset": true, 
00:16:03.025 "compare": false, 00:16:03.025 "compare_and_write": false, 00:16:03.025 "abort": true, 00:16:03.025 "nvme_admin": false, 00:16:03.025 "nvme_io": false 00:16:03.025 }, 00:16:03.025 "memory_domains": [ 00:16:03.025 { 00:16:03.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.025 "dma_device_type": 2 00:16:03.025 } 00:16:03.025 ], 00:16:03.025 "driver_specific": {} 00:16:03.025 } 00:16:03.025 ] 00:16:03.025 16:31:39 -- common/autotest_common.sh@895 -- # return 0 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.025 "name": "Existed_Raid", 00:16:03.025 "uuid": "6ffc3949-d95a-41ca-bb56-7ffa96bd7e58", 00:16:03.025 "strip_size_kb": 0, 00:16:03.025 "state": "online", 00:16:03.025 "raid_level": "raid1", 00:16:03.025 "superblock": true, 00:16:03.025 "num_base_bdevs": 2, 00:16:03.025 "num_base_bdevs_discovered": 2, 00:16:03.025 "num_base_bdevs_operational": 2, 00:16:03.025 "base_bdevs_list": [ 00:16:03.025 { 00:16:03.025 "name": "BaseBdev1", 00:16:03.025 "uuid": "8d6c5f13-262a-46ce-986f-e1e2ab7d815f", 00:16:03.025 "is_configured": true, 00:16:03.025 "data_offset": 2048, 00:16:03.025 "data_size": 63488 00:16:03.025 }, 00:16:03.025 { 00:16:03.025 "name": "BaseBdev2", 00:16:03.025 "uuid": "b6491e07-54d1-4ed1-9380-0e3c18fe6082", 00:16:03.025 "is_configured": true, 00:16:03.025 "data_offset": 2048, 00:16:03.025 "data_size": 63488 00:16:03.025 } 00:16:03.025 ] 00:16:03.025 }' 00:16:03.025 16:31:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.025 16:31:39 -- common/autotest_common.sh@10 -- # set +x 00:16:03.603 16:31:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:03.861 [2024-07-11 16:31:40.638982] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.119 
16:31:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:04.119 "name": "Existed_Raid", 00:16:04.119 "uuid": "6ffc3949-d95a-41ca-bb56-7ffa96bd7e58", 00:16:04.119 "strip_size_kb": 0, 00:16:04.119 "state": "online", 00:16:04.119 "raid_level": "raid1", 00:16:04.119 "superblock": true, 00:16:04.119 "num_base_bdevs": 2, 00:16:04.119 "num_base_bdevs_discovered": 1, 00:16:04.119 "num_base_bdevs_operational": 1, 00:16:04.119 "base_bdevs_list": [ 00:16:04.119 { 00:16:04.119 "name": null, 00:16:04.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.119 "is_configured": false, 00:16:04.119 "data_offset": 2048, 00:16:04.119 "data_size": 63488 00:16:04.119 }, 00:16:04.119 { 00:16:04.119 "name": "BaseBdev2", 00:16:04.119 "uuid": "b6491e07-54d1-4ed1-9380-0e3c18fe6082", 00:16:04.119 "is_configured": true, 00:16:04.119 "data_offset": 2048, 00:16:04.119 "data_size": 63488 00:16:04.119 } 00:16:04.119 ] 00:16:04.119 }' 00:16:04.119 16:31:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:04.119 16:31:40 -- common/autotest_common.sh@10 -- # set +x 00:16:04.686 16:31:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:04.686 16:31:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:04.686 16:31:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.686 16:31:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:04.943 16:31:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:04.943 16:31:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.943 16:31:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:05.201 [2024-07-11 16:31:41.897550] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:05.201 [2024-07-11 16:31:41.897708] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.201 [2024-07-11 16:31:41.897860] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.201 [2024-07-11 16:31:41.960654] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.201 [2024-07-11 16:31:41.960821] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:16:05.201 16:31:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:05.201 16:31:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:05.201 16:31:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:16:05.201 16:31:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:05.459 16:31:42 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:05.459 16:31:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:05.459 16:31:42 -- bdev/bdev_raid.sh@287 -- # killprocess 116959 00:16:05.459 16:31:42 -- common/autotest_common.sh@926 -- # '[' -z 116959 ']' 00:16:05.459 16:31:42 -- common/autotest_common.sh@930 -- # kill -0 116959 00:16:05.459 16:31:42 -- common/autotest_common.sh@931 -- # uname 00:16:05.459 16:31:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.459 16:31:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 116959 00:16:05.459 killing process with pid 116959 00:16:05.459 16:31:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:05.459 16:31:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:05.459 16:31:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 116959' 00:16:05.459 16:31:42 -- common/autotest_common.sh@945 -- # kill 116959 00:16:05.459 16:31:42 -- common/autotest_common.sh@950 -- # wait 116959 00:16:05.459 [2024-07-11 16:31:42.193192] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.459 [2024-07-11 16:31:42.193315] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.394 ************************************ 00:16:06.394 END TEST raid_state_function_test_sb 00:16:06.394 ************************************ 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:06.394 00:16:06.394 real 0m10.294s 00:16:06.394 user 0m18.198s 00:16:06.394 sys 0m1.098s 00:16:06.394 16:31:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.394 16:31:43 -- common/autotest_common.sh@10 -- # set +x 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:06.394 16:31:43 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:06.394 16:31:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:06.394 16:31:43 -- common/autotest_common.sh@10 -- # set +x 00:16:06.394 ************************************ 00:16:06.394 START TEST raid_superblock_test 00:16:06.394 ************************************ 00:16:06.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
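[Note: the superblock test that starts here layers passthru bdevs over the mallocs so each base bdev carries a fixed UUID before the array is built. A hand-run sketch of that setup, assuming the same rpc.py calls traced below; command lines are copied from this run's trace, comments are added here.]
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# malloc backing disk plus a passthru wrapper pinning its UUID
$rpc -s $sock bdev_malloc_create 32 512 -b malloc1
$rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$rpc -s $sock bdev_malloc_create 32 512 -b malloc2
$rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# -s writes a superblock, so raid_bdev1 reports "superblock": true in the dump below
$rpc -s $sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s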
00:16:06.394 16:31:43 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@357 -- # raid_pid=117294 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@358 -- # waitforlisten 117294 /var/tmp/spdk-raid.sock 00:16:06.394 16:31:43 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:06.394 16:31:43 -- common/autotest_common.sh@819 -- # '[' -z 117294 ']' 00:16:06.394 16:31:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:06.394 16:31:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:06.394 16:31:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:06.394 16:31:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:06.394 16:31:43 -- common/autotest_common.sh@10 -- # set +x 00:16:06.653 [2024-07-11 16:31:43.214272] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:06.653 [2024-07-11 16:31:43.214710] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117294 ] 00:16:06.653 [2024-07-11 16:31:43.385773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.912 [2024-07-11 16:31:43.613387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.170 [2024-07-11 16:31:43.786542] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.428 16:31:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:07.428 16:31:44 -- common/autotest_common.sh@852 -- # return 0 00:16:07.428 16:31:44 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:07.428 16:31:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:07.428 16:31:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:07.428 16:31:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:07.428 16:31:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:07.428 16:31:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.428 16:31:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.428 16:31:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.428 16:31:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:07.687 malloc1 00:16:07.687 16:31:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.946 [2024-07-11 16:31:44.649370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.946 [2024-07-11 16:31:44.649627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.946 [2024-07-11 16:31:44.649690] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:07.946 [2024-07-11 16:31:44.649952] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.946 [2024-07-11 16:31:44.652018] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.946 [2024-07-11 16:31:44.652177] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.946 pt1 00:16:07.946 16:31:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:07.946 16:31:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:07.946 16:31:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:07.946 16:31:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:07.946 16:31:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:07.946 16:31:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:07.946 16:31:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:07.946 16:31:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:07.946 16:31:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:08.204 malloc2 00:16:08.204 16:31:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:16:08.576 [2024-07-11 16:31:45.123326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:08.576 [2024-07-11 16:31:45.123531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.576 [2024-07-11 16:31:45.123665] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:08.576 [2024-07-11 16:31:45.123818] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.576 [2024-07-11 16:31:45.125892] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.577 [2024-07-11 16:31:45.126054] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:08.577 pt2 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:08.577 [2024-07-11 16:31:45.303397] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:08.577 [2024-07-11 16:31:45.305010] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:08.577 [2024-07-11 16:31:45.305322] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:16:08.577 [2024-07-11 16:31:45.305437] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:08.577 [2024-07-11 16:31:45.305605] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:08.577 [2024-07-11 16:31:45.306084] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:16:08.577 [2024-07-11 16:31:45.306219] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:16:08.577 [2024-07-11 16:31:45.306435] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.577 16:31:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.833 16:31:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:08.833 "name": "raid_bdev1", 00:16:08.833 "uuid": "5d7139eb-f97e-4311-97e5-7e18ac980eff", 00:16:08.833 "strip_size_kb": 0, 00:16:08.833 "state": "online", 00:16:08.833 "raid_level": "raid1", 00:16:08.833 "superblock": true, 00:16:08.833 "num_base_bdevs": 2, 00:16:08.833 "num_base_bdevs_discovered": 2, 00:16:08.833 
"num_base_bdevs_operational": 2, 00:16:08.833 "base_bdevs_list": [ 00:16:08.833 { 00:16:08.833 "name": "pt1", 00:16:08.833 "uuid": "254962f0-230b-5ece-925a-409b138e5d8f", 00:16:08.833 "is_configured": true, 00:16:08.833 "data_offset": 2048, 00:16:08.833 "data_size": 63488 00:16:08.833 }, 00:16:08.833 { 00:16:08.833 "name": "pt2", 00:16:08.833 "uuid": "4dfff0a5-4778-5d2f-99ca-00fd2ae3bb67", 00:16:08.833 "is_configured": true, 00:16:08.833 "data_offset": 2048, 00:16:08.833 "data_size": 63488 00:16:08.833 } 00:16:08.833 ] 00:16:08.833 }' 00:16:08.833 16:31:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:08.833 16:31:45 -- common/autotest_common.sh@10 -- # set +x 00:16:09.769 16:31:46 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:09.769 16:31:46 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:09.769 [2024-07-11 16:31:46.503753] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.769 16:31:46 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5d7139eb-f97e-4311-97e5-7e18ac980eff 00:16:09.769 16:31:46 -- bdev/bdev_raid.sh@380 -- # '[' -z 5d7139eb-f97e-4311-97e5-7e18ac980eff ']' 00:16:09.769 16:31:46 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:10.029 [2024-07-11 16:31:46.691597] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:10.029 [2024-07-11 16:31:46.691723] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:10.029 [2024-07-11 16:31:46.691903] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:10.029 [2024-07-11 16:31:46.692081] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.029 [2024-07-11 16:31:46.692197] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:16:10.029 16:31:46 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:10.029 16:31:46 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.287 16:31:46 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:10.287 16:31:46 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:10.287 16:31:46 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:10.287 16:31:46 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:10.546 16:31:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:10.546 16:31:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:10.805 16:31:47 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:10.805 16:31:47 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:10.805 16:31:47 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:10.805 16:31:47 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:10.805 16:31:47 -- common/autotest_common.sh@640 -- # local es=0 00:16:10.805 16:31:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:10.805 16:31:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.805 16:31:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.805 16:31:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.805 16:31:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.805 16:31:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.805 16:31:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:10.805 16:31:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.805 16:31:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:10.805 16:31:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:11.065 [2024-07-11 16:31:47.723747] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:11.065 [2024-07-11 16:31:47.725409] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:11.065 [2024-07-11 16:31:47.725585] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:11.065 [2024-07-11 16:31:47.725762] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:11.065 [2024-07-11 16:31:47.725893] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:11.065 [2024-07-11 16:31:47.725929] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:16:11.065 request: 00:16:11.065 { 00:16:11.065 "name": "raid_bdev1", 00:16:11.065 "raid_level": "raid1", 00:16:11.065 "base_bdevs": [ 00:16:11.065 "malloc1", 00:16:11.065 "malloc2" 00:16:11.065 ], 00:16:11.065 "superblock": false, 00:16:11.065 "method": "bdev_raid_create", 00:16:11.065 "req_id": 1 00:16:11.065 } 00:16:11.065 Got JSON-RPC error response 00:16:11.065 response: 00:16:11.065 { 00:16:11.065 "code": -17, 00:16:11.065 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:11.065 } 00:16:11.065 16:31:47 -- common/autotest_common.sh@643 -- # es=1 00:16:11.065 16:31:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:11.065 16:31:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:11.065 16:31:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:11.065 16:31:47 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.065 16:31:47 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:11.323 16:31:47 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:11.323 16:31:47 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:11.323 16:31:47 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:11.323 [2024-07-11 16:31:48.091761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:11.323 [2024-07-11 16:31:48.092009] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.323 [2024-07-11 16:31:48.092075] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:11.323 [2024-07-11 16:31:48.092302] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.323 [2024-07-11 16:31:48.094264] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.323 [2024-07-11 16:31:48.094439] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:11.323 [2024-07-11 16:31:48.094621] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:11.323 [2024-07-11 16:31:48.094763] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:11.323 pt1 00:16:11.323 16:31:48 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:11.323 16:31:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:11.323 16:31:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:11.323 16:31:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:11.323 16:31:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:11.323 16:31:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:11.323 16:31:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:11.323 16:31:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:11.323 16:31:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:11.323 16:31:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:11.323 16:31:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.323 16:31:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.582 16:31:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:11.582 "name": "raid_bdev1", 00:16:11.582 "uuid": "5d7139eb-f97e-4311-97e5-7e18ac980eff", 00:16:11.582 "strip_size_kb": 0, 00:16:11.582 "state": "configuring", 00:16:11.582 "raid_level": "raid1", 00:16:11.582 "superblock": true, 00:16:11.582 "num_base_bdevs": 2, 00:16:11.582 "num_base_bdevs_discovered": 1, 00:16:11.582 "num_base_bdevs_operational": 2, 00:16:11.582 "base_bdevs_list": [ 00:16:11.582 { 00:16:11.582 "name": "pt1", 00:16:11.582 "uuid": "254962f0-230b-5ece-925a-409b138e5d8f", 00:16:11.582 "is_configured": true, 00:16:11.582 "data_offset": 2048, 00:16:11.582 "data_size": 63488 00:16:11.582 }, 00:16:11.582 { 00:16:11.582 "name": null, 00:16:11.582 "uuid": "4dfff0a5-4778-5d2f-99ca-00fd2ae3bb67", 00:16:11.582 "is_configured": false, 00:16:11.582 "data_offset": 2048, 00:16:11.582 "data_size": 63488 00:16:11.582 } 00:16:11.582 ] 00:16:11.582 }' 00:16:11.582 16:31:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:11.582 16:31:48 -- common/autotest_common.sh@10 -- # set +x 00:16:12.517 16:31:48 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:16:12.517 16:31:48 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:12.517 16:31:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:12.517 16:31:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:12.517 [2024-07-11 16:31:49.127958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:12.517 [2024-07-11 16:31:49.128155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:12.517 [2024-07-11 16:31:49.128217] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:12.517 [2024-07-11 16:31:49.128460] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:12.517 [2024-07-11 16:31:49.128900] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:12.517 [2024-07-11 16:31:49.129123] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:12.517 [2024-07-11 16:31:49.129351] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:12.517 [2024-07-11 16:31:49.129505] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:12.517 [2024-07-11 16:31:49.129680] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:16:12.517 [2024-07-11 16:31:49.129858] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:12.517 [2024-07-11 16:31:49.130044] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:12.517 [2024-07-11 16:31:49.130466] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:16:12.517 [2024-07-11 16:31:49.130587] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:16:12.517 [2024-07-11 16:31:49.130786] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.517 pt2 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.517 16:31:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.776 16:31:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:12.776 "name": "raid_bdev1", 00:16:12.776 "uuid": "5d7139eb-f97e-4311-97e5-7e18ac980eff", 00:16:12.776 "strip_size_kb": 0, 00:16:12.776 "state": "online", 00:16:12.776 "raid_level": "raid1", 00:16:12.776 "superblock": true, 00:16:12.776 "num_base_bdevs": 2, 00:16:12.776 "num_base_bdevs_discovered": 2, 00:16:12.776 "num_base_bdevs_operational": 2, 00:16:12.776 "base_bdevs_list": [ 00:16:12.776 { 00:16:12.776 "name": "pt1", 00:16:12.776 "uuid": "254962f0-230b-5ece-925a-409b138e5d8f", 00:16:12.776 "is_configured": true, 00:16:12.776 "data_offset": 2048, 00:16:12.776 "data_size": 63488 00:16:12.776 }, 00:16:12.776 { 00:16:12.776 "name": "pt2", 00:16:12.776 "uuid": "4dfff0a5-4778-5d2f-99ca-00fd2ae3bb67", 00:16:12.776 "is_configured": true, 00:16:12.776 "data_offset": 2048, 00:16:12.776 "data_size": 63488 00:16:12.776 } 
00:16:12.776 ] 00:16:12.776 }' 00:16:12.776 16:31:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:12.776 16:31:49 -- common/autotest_common.sh@10 -- # set +x 00:16:13.343 16:31:50 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:13.343 16:31:50 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:13.602 [2024-07-11 16:31:50.212310] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@430 -- # '[' 5d7139eb-f97e-4311-97e5-7e18ac980eff '!=' 5d7139eb-f97e-4311-97e5-7e18ac980eff ']' 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:13.602 [2024-07-11 16:31:50.388197] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.602 16:31:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.859 16:31:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.859 "name": "raid_bdev1", 00:16:13.859 "uuid": "5d7139eb-f97e-4311-97e5-7e18ac980eff", 00:16:13.859 "strip_size_kb": 0, 00:16:13.859 "state": "online", 00:16:13.859 "raid_level": "raid1", 00:16:13.859 "superblock": true, 00:16:13.859 "num_base_bdevs": 2, 00:16:13.859 "num_base_bdevs_discovered": 1, 00:16:13.859 "num_base_bdevs_operational": 1, 00:16:13.859 "base_bdevs_list": [ 00:16:13.859 { 00:16:13.859 "name": null, 00:16:13.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.859 "is_configured": false, 00:16:13.859 "data_offset": 2048, 00:16:13.859 "data_size": 63488 00:16:13.859 }, 00:16:13.859 { 00:16:13.859 "name": "pt2", 00:16:13.859 "uuid": "4dfff0a5-4778-5d2f-99ca-00fd2ae3bb67", 00:16:13.859 "is_configured": true, 00:16:13.859 "data_offset": 2048, 00:16:13.859 "data_size": 63488 00:16:13.859 } 00:16:13.859 ] 00:16:13.859 }' 00:16:13.859 16:31:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.859 16:31:50 -- common/autotest_common.sh@10 -- # set +x 00:16:14.794 16:31:51 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:14.794 [2024-07-11 16:31:51.532500] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.794 [2024-07-11 16:31:51.532684] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: 
raid bdev state changing from online to offline 00:16:14.794 [2024-07-11 16:31:51.532855] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.794 [2024-07-11 16:31:51.533046] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.794 [2024-07-11 16:31:51.533189] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:16:14.794 16:31:51 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.794 16:31:51 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:15.052 16:31:51 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:15.052 16:31:51 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:15.052 16:31:51 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:15.052 16:31:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:15.052 16:31:51 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:15.310 16:31:51 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:15.310 16:31:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:15.310 16:31:51 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:15.310 16:31:51 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:15.310 16:31:51 -- bdev/bdev_raid.sh@462 -- # i=1 00:16:15.310 16:31:51 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.569 [2024-07-11 16:31:52.152625] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.569 [2024-07-11 16:31:52.152828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.569 [2024-07-11 16:31:52.152888] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:15.569 [2024-07-11 16:31:52.153137] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.569 [2024-07-11 16:31:52.155142] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.569 [2024-07-11 16:31:52.155313] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.569 [2024-07-11 16:31:52.155516] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:15.569 [2024-07-11 16:31:52.155656] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:15.569 [2024-07-11 16:31:52.155858] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:16:15.569 [2024-07-11 16:31:52.155979] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:15.569 [2024-07-11 16:31:52.156104] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:16:15.569 [2024-07-11 16:31:52.156496] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:16:15.569 [2024-07-11 16:31:52.156615] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:16:15.569 [2024-07-11 16:31:52.156870] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.569 pt2 00:16:15.569 16:31:52 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:15.569 16:31:52 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:15.569 16:31:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:15.569 16:31:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:15.569 16:31:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:15.569 16:31:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:16:15.569 16:31:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:15.569 16:31:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:15.569 16:31:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:15.569 16:31:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:15.569 16:31:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.569 16:31:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.827 16:31:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:15.827 "name": "raid_bdev1", 00:16:15.827 "uuid": "5d7139eb-f97e-4311-97e5-7e18ac980eff", 00:16:15.827 "strip_size_kb": 0, 00:16:15.827 "state": "online", 00:16:15.827 "raid_level": "raid1", 00:16:15.827 "superblock": true, 00:16:15.827 "num_base_bdevs": 2, 00:16:15.827 "num_base_bdevs_discovered": 1, 00:16:15.827 "num_base_bdevs_operational": 1, 00:16:15.827 "base_bdevs_list": [ 00:16:15.827 { 00:16:15.827 "name": null, 00:16:15.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.827 "is_configured": false, 00:16:15.827 "data_offset": 2048, 00:16:15.827 "data_size": 63488 00:16:15.827 }, 00:16:15.827 { 00:16:15.827 "name": "pt2", 00:16:15.827 "uuid": "4dfff0a5-4778-5d2f-99ca-00fd2ae3bb67", 00:16:15.827 "is_configured": true, 00:16:15.827 "data_offset": 2048, 00:16:15.827 "data_size": 63488 00:16:15.827 } 00:16:15.827 ] 00:16:15.827 }' 00:16:15.827 16:31:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:15.827 16:31:52 -- common/autotest_common.sh@10 -- # set +x 00:16:16.394 16:31:53 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:16:16.394 16:31:53 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:16.394 16:31:53 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:16:16.651 [2024-07-11 16:31:53.313223] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.651 16:31:53 -- bdev/bdev_raid.sh@506 -- # '[' 5d7139eb-f97e-4311-97e5-7e18ac980eff '!=' 5d7139eb-f97e-4311-97e5-7e18ac980eff ']' 00:16:16.651 16:31:53 -- bdev/bdev_raid.sh@511 -- # killprocess 117294 00:16:16.651 16:31:53 -- common/autotest_common.sh@926 -- # '[' -z 117294 ']' 00:16:16.651 16:31:53 -- common/autotest_common.sh@930 -- # kill -0 117294 00:16:16.651 16:31:53 -- common/autotest_common.sh@931 -- # uname 00:16:16.651 16:31:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:16.651 16:31:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117294 00:16:16.651 killing process with pid 117294 00:16:16.651 16:31:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:16.651 16:31:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:16.651 16:31:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117294' 00:16:16.651 16:31:53 -- common/autotest_common.sh@945 -- # kill 117294 00:16:16.651 16:31:53 -- common/autotest_common.sh@950 -- # wait 117294 00:16:16.651 [2024-07-11 16:31:53.346555] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:16:16.651 [2024-07-11 16:31:53.346689] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:16.651 [2024-07-11 16:31:53.346780] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:16.651 [2024-07-11 16:31:53.346895] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:16:16.909 [2024-07-11 16:31:53.475816] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.841 ************************************ 00:16:17.841 END TEST raid_superblock_test 00:16:17.841 ************************************ 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:17.841 00:16:17.841 real 0m11.245s 00:16:17.841 user 0m20.221s 00:16:17.841 sys 0m1.186s 00:16:17.841 16:31:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:17.841 16:31:54 -- common/autotest_common.sh@10 -- # set +x 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:17.841 16:31:54 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:17.841 16:31:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:17.841 16:31:54 -- common/autotest_common.sh@10 -- # set +x 00:16:17.841 ************************************ 00:16:17.841 START TEST raid_state_function_test 00:16:17.841 ************************************ 00:16:17.841 16:31:54 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:17.841 16:31:54 -- 
bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@226 -- # raid_pid=117674 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:17.841 Process raid pid: 117674 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117674' 00:16:17.841 16:31:54 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117674 /var/tmp/spdk-raid.sock 00:16:17.841 16:31:54 -- common/autotest_common.sh@819 -- # '[' -z 117674 ']' 00:16:17.841 16:31:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:17.841 16:31:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:17.841 16:31:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:17.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:17.841 16:31:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:17.841 16:31:54 -- common/autotest_common.sh@10 -- # set +x 00:16:17.841 [2024-07-11 16:31:54.507446] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:17.841 [2024-07-11 16:31:54.507823] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.099 [2024-07-11 16:31:54.660406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.099 [2024-07-11 16:31:54.821861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.358 [2024-07-11 16:31:54.993999] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.925 16:31:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:18.925 16:31:55 -- common/autotest_common.sh@852 -- # return 0 00:16:18.925 16:31:55 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:18.925 [2024-07-11 16:31:55.628635] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.925 [2024-07-11 16:31:55.628854] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.925 [2024-07-11 16:31:55.629016] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.925 [2024-07-11 16:31:55.629131] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.925 [2024-07-11 16:31:55.629217] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.925 [2024-07-11 16:31:55.629312] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.925 16:31:55 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:18.925 16:31:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:18.925 16:31:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:18.925 16:31:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:18.925 16:31:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:18.925 16:31:55 -- bdev/bdev_raid.sh@121 
-- # local num_base_bdevs_operational=3 00:16:18.925 16:31:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:18.925 16:31:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:18.925 16:31:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:18.925 16:31:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:18.925 16:31:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.925 16:31:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.184 16:31:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:19.184 "name": "Existed_Raid", 00:16:19.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.184 "strip_size_kb": 64, 00:16:19.184 "state": "configuring", 00:16:19.184 "raid_level": "raid0", 00:16:19.184 "superblock": false, 00:16:19.184 "num_base_bdevs": 3, 00:16:19.184 "num_base_bdevs_discovered": 0, 00:16:19.184 "num_base_bdevs_operational": 3, 00:16:19.184 "base_bdevs_list": [ 00:16:19.184 { 00:16:19.184 "name": "BaseBdev1", 00:16:19.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.184 "is_configured": false, 00:16:19.184 "data_offset": 0, 00:16:19.184 "data_size": 0 00:16:19.184 }, 00:16:19.184 { 00:16:19.184 "name": "BaseBdev2", 00:16:19.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.184 "is_configured": false, 00:16:19.184 "data_offset": 0, 00:16:19.184 "data_size": 0 00:16:19.184 }, 00:16:19.184 { 00:16:19.184 "name": "BaseBdev3", 00:16:19.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:19.184 "is_configured": false, 00:16:19.184 "data_offset": 0, 00:16:19.184 "data_size": 0 00:16:19.184 } 00:16:19.184 ] 00:16:19.184 }' 00:16:19.184 16:31:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:19.184 16:31:55 -- common/autotest_common.sh@10 -- # set +x 00:16:19.751 16:31:56 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:20.009 [2024-07-11 16:31:56.708726] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:20.009 [2024-07-11 16:31:56.708864] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:20.009 16:31:56 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:20.267 [2024-07-11 16:31:56.940771] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:20.267 [2024-07-11 16:31:56.940957] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:20.268 [2024-07-11 16:31:56.941053] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:20.268 [2024-07-11 16:31:56.941103] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:20.268 [2024-07-11 16:31:56.941222] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:20.268 [2024-07-11 16:31:56.941306] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:20.268 16:31:56 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:20.525 [2024-07-11 16:31:57.137813] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev1 is claimed 00:16:20.525 BaseBdev1 00:16:20.525 16:31:57 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:20.525 16:31:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:20.525 16:31:57 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:20.525 16:31:57 -- common/autotest_common.sh@889 -- # local i 00:16:20.525 16:31:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:20.525 16:31:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:20.525 16:31:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:20.783 16:31:57 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:20.783 [ 00:16:20.783 { 00:16:20.783 "name": "BaseBdev1", 00:16:20.783 "aliases": [ 00:16:20.783 "ca51d9f0-0abf-4248-bf8f-6bb44335fe9b" 00:16:20.783 ], 00:16:20.783 "product_name": "Malloc disk", 00:16:20.783 "block_size": 512, 00:16:20.783 "num_blocks": 65536, 00:16:20.783 "uuid": "ca51d9f0-0abf-4248-bf8f-6bb44335fe9b", 00:16:20.783 "assigned_rate_limits": { 00:16:20.783 "rw_ios_per_sec": 0, 00:16:20.783 "rw_mbytes_per_sec": 0, 00:16:20.783 "r_mbytes_per_sec": 0, 00:16:20.783 "w_mbytes_per_sec": 0 00:16:20.783 }, 00:16:20.783 "claimed": true, 00:16:20.783 "claim_type": "exclusive_write", 00:16:20.783 "zoned": false, 00:16:20.783 "supported_io_types": { 00:16:20.783 "read": true, 00:16:20.783 "write": true, 00:16:20.783 "unmap": true, 00:16:20.783 "write_zeroes": true, 00:16:20.783 "flush": true, 00:16:20.783 "reset": true, 00:16:20.783 "compare": false, 00:16:20.783 "compare_and_write": false, 00:16:20.783 "abort": true, 00:16:20.783 "nvme_admin": false, 00:16:20.783 "nvme_io": false 00:16:20.783 }, 00:16:20.783 "memory_domains": [ 00:16:20.783 { 00:16:20.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.783 "dma_device_type": 2 00:16:20.783 } 00:16:20.783 ], 00:16:20.783 "driver_specific": {} 00:16:20.783 } 00:16:20.783 ] 00:16:20.783 16:31:57 -- common/autotest_common.sh@895 -- # return 0 00:16:20.783 16:31:57 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:20.783 16:31:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:20.783 16:31:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:20.783 16:31:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:20.783 16:31:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:20.783 16:31:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:20.783 16:31:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:20.783 16:31:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:20.783 16:31:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:20.783 16:31:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:20.783 16:31:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.783 16:31:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.041 16:31:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:21.041 "name": "Existed_Raid", 00:16:21.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.041 "strip_size_kb": 64, 00:16:21.041 "state": "configuring", 00:16:21.041 "raid_level": "raid0", 00:16:21.041 "superblock": false, 00:16:21.041 "num_base_bdevs": 3, 
00:16:21.041 "num_base_bdevs_discovered": 1, 00:16:21.041 "num_base_bdevs_operational": 3, 00:16:21.041 "base_bdevs_list": [ 00:16:21.041 { 00:16:21.041 "name": "BaseBdev1", 00:16:21.041 "uuid": "ca51d9f0-0abf-4248-bf8f-6bb44335fe9b", 00:16:21.041 "is_configured": true, 00:16:21.041 "data_offset": 0, 00:16:21.041 "data_size": 65536 00:16:21.041 }, 00:16:21.041 { 00:16:21.041 "name": "BaseBdev2", 00:16:21.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.041 "is_configured": false, 00:16:21.041 "data_offset": 0, 00:16:21.041 "data_size": 0 00:16:21.041 }, 00:16:21.041 { 00:16:21.041 "name": "BaseBdev3", 00:16:21.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.041 "is_configured": false, 00:16:21.041 "data_offset": 0, 00:16:21.041 "data_size": 0 00:16:21.041 } 00:16:21.041 ] 00:16:21.041 }' 00:16:21.041 16:31:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:21.041 16:31:57 -- common/autotest_common.sh@10 -- # set +x 00:16:21.607 16:31:58 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:21.865 [2024-07-11 16:31:58.574370] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:21.865 [2024-07-11 16:31:58.574729] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:21.865 16:31:58 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:21.865 16:31:58 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:22.123 [2024-07-11 16:31:58.770424] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.123 [2024-07-11 16:31:58.772197] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:22.123 [2024-07-11 16:31:58.772388] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:22.123 [2024-07-11 16:31:58.772505] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:22.123 [2024-07-11 16:31:58.772619] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.123 16:31:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.382 16:31:58 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:16:22.382 "name": "Existed_Raid", 00:16:22.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.382 "strip_size_kb": 64, 00:16:22.382 "state": "configuring", 00:16:22.382 "raid_level": "raid0", 00:16:22.382 "superblock": false, 00:16:22.382 "num_base_bdevs": 3, 00:16:22.382 "num_base_bdevs_discovered": 1, 00:16:22.382 "num_base_bdevs_operational": 3, 00:16:22.382 "base_bdevs_list": [ 00:16:22.382 { 00:16:22.382 "name": "BaseBdev1", 00:16:22.382 "uuid": "ca51d9f0-0abf-4248-bf8f-6bb44335fe9b", 00:16:22.382 "is_configured": true, 00:16:22.382 "data_offset": 0, 00:16:22.382 "data_size": 65536 00:16:22.382 }, 00:16:22.382 { 00:16:22.382 "name": "BaseBdev2", 00:16:22.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.382 "is_configured": false, 00:16:22.382 "data_offset": 0, 00:16:22.382 "data_size": 0 00:16:22.382 }, 00:16:22.382 { 00:16:22.382 "name": "BaseBdev3", 00:16:22.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.382 "is_configured": false, 00:16:22.382 "data_offset": 0, 00:16:22.382 "data_size": 0 00:16:22.382 } 00:16:22.382 ] 00:16:22.382 }' 00:16:22.382 16:31:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.382 16:31:58 -- common/autotest_common.sh@10 -- # set +x 00:16:22.949 16:31:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:23.208 [2024-07-11 16:31:59.844492] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.208 BaseBdev2 00:16:23.208 16:31:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:23.208 16:31:59 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:23.208 16:31:59 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:23.208 16:31:59 -- common/autotest_common.sh@889 -- # local i 00:16:23.208 16:31:59 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:23.208 16:31:59 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:23.208 16:31:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:23.465 16:32:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:23.723 [ 00:16:23.723 { 00:16:23.723 "name": "BaseBdev2", 00:16:23.723 "aliases": [ 00:16:23.723 "8fab4f9b-f921-4648-a97e-3429adf05dd7" 00:16:23.723 ], 00:16:23.723 "product_name": "Malloc disk", 00:16:23.723 "block_size": 512, 00:16:23.723 "num_blocks": 65536, 00:16:23.723 "uuid": "8fab4f9b-f921-4648-a97e-3429adf05dd7", 00:16:23.723 "assigned_rate_limits": { 00:16:23.723 "rw_ios_per_sec": 0, 00:16:23.723 "rw_mbytes_per_sec": 0, 00:16:23.723 "r_mbytes_per_sec": 0, 00:16:23.723 "w_mbytes_per_sec": 0 00:16:23.723 }, 00:16:23.723 "claimed": true, 00:16:23.723 "claim_type": "exclusive_write", 00:16:23.723 "zoned": false, 00:16:23.723 "supported_io_types": { 00:16:23.723 "read": true, 00:16:23.723 "write": true, 00:16:23.723 "unmap": true, 00:16:23.723 "write_zeroes": true, 00:16:23.723 "flush": true, 00:16:23.723 "reset": true, 00:16:23.723 "compare": false, 00:16:23.723 "compare_and_write": false, 00:16:23.723 "abort": true, 00:16:23.723 "nvme_admin": false, 00:16:23.723 "nvme_io": false 00:16:23.723 }, 00:16:23.723 "memory_domains": [ 00:16:23.723 { 00:16:23.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.723 "dma_device_type": 2 00:16:23.723 } 00:16:23.723 ], 00:16:23.723 
"driver_specific": {} 00:16:23.723 } 00:16:23.723 ] 00:16:23.723 16:32:00 -- common/autotest_common.sh@895 -- # return 0 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.723 16:32:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.724 "name": "Existed_Raid", 00:16:23.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.724 "strip_size_kb": 64, 00:16:23.724 "state": "configuring", 00:16:23.724 "raid_level": "raid0", 00:16:23.724 "superblock": false, 00:16:23.724 "num_base_bdevs": 3, 00:16:23.724 "num_base_bdevs_discovered": 2, 00:16:23.724 "num_base_bdevs_operational": 3, 00:16:23.724 "base_bdevs_list": [ 00:16:23.724 { 00:16:23.724 "name": "BaseBdev1", 00:16:23.724 "uuid": "ca51d9f0-0abf-4248-bf8f-6bb44335fe9b", 00:16:23.724 "is_configured": true, 00:16:23.724 "data_offset": 0, 00:16:23.724 "data_size": 65536 00:16:23.724 }, 00:16:23.724 { 00:16:23.724 "name": "BaseBdev2", 00:16:23.724 "uuid": "8fab4f9b-f921-4648-a97e-3429adf05dd7", 00:16:23.724 "is_configured": true, 00:16:23.724 "data_offset": 0, 00:16:23.724 "data_size": 65536 00:16:23.724 }, 00:16:23.724 { 00:16:23.724 "name": "BaseBdev3", 00:16:23.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.724 "is_configured": false, 00:16:23.724 "data_offset": 0, 00:16:23.724 "data_size": 0 00:16:23.724 } 00:16:23.724 ] 00:16:23.724 }' 00:16:23.724 16:32:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.724 16:32:00 -- common/autotest_common.sh@10 -- # set +x 00:16:24.658 16:32:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:24.916 [2024-07-11 16:32:01.492657] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:24.916 [2024-07-11 16:32:01.492876] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:24.916 [2024-07-11 16:32:01.492915] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:24.916 [2024-07-11 16:32:01.493154] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:24.916 [2024-07-11 16:32:01.493690] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:24.916 [2024-07-11 16:32:01.493822] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x616000007280 00:16:24.916 [2024-07-11 16:32:01.494180] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.916 BaseBdev3 00:16:24.916 16:32:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:24.916 16:32:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:24.916 16:32:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:24.916 16:32:01 -- common/autotest_common.sh@889 -- # local i 00:16:24.916 16:32:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:24.916 16:32:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:24.916 16:32:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:25.174 16:32:01 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:25.174 [ 00:16:25.174 { 00:16:25.174 "name": "BaseBdev3", 00:16:25.174 "aliases": [ 00:16:25.174 "fd96febd-f06d-4d12-8e91-2e3bd0b2ff1b" 00:16:25.174 ], 00:16:25.174 "product_name": "Malloc disk", 00:16:25.174 "block_size": 512, 00:16:25.174 "num_blocks": 65536, 00:16:25.174 "uuid": "fd96febd-f06d-4d12-8e91-2e3bd0b2ff1b", 00:16:25.174 "assigned_rate_limits": { 00:16:25.174 "rw_ios_per_sec": 0, 00:16:25.174 "rw_mbytes_per_sec": 0, 00:16:25.174 "r_mbytes_per_sec": 0, 00:16:25.174 "w_mbytes_per_sec": 0 00:16:25.174 }, 00:16:25.174 "claimed": true, 00:16:25.174 "claim_type": "exclusive_write", 00:16:25.174 "zoned": false, 00:16:25.174 "supported_io_types": { 00:16:25.174 "read": true, 00:16:25.174 "write": true, 00:16:25.174 "unmap": true, 00:16:25.174 "write_zeroes": true, 00:16:25.174 "flush": true, 00:16:25.174 "reset": true, 00:16:25.174 "compare": false, 00:16:25.174 "compare_and_write": false, 00:16:25.174 "abort": true, 00:16:25.174 "nvme_admin": false, 00:16:25.174 "nvme_io": false 00:16:25.174 }, 00:16:25.174 "memory_domains": [ 00:16:25.174 { 00:16:25.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.174 "dma_device_type": 2 00:16:25.174 } 00:16:25.174 ], 00:16:25.174 "driver_specific": {} 00:16:25.174 } 00:16:25.174 ] 00:16:25.432 16:32:01 -- common/autotest_common.sh@895 -- # return 0 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.432 16:32:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.432 16:32:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:25.432 "name": "Existed_Raid", 
00:16:25.432 "uuid": "e1ea6bbb-53e6-4f08-bf43-610b2342c3b4", 00:16:25.432 "strip_size_kb": 64, 00:16:25.432 "state": "online", 00:16:25.432 "raid_level": "raid0", 00:16:25.432 "superblock": false, 00:16:25.432 "num_base_bdevs": 3, 00:16:25.432 "num_base_bdevs_discovered": 3, 00:16:25.432 "num_base_bdevs_operational": 3, 00:16:25.432 "base_bdevs_list": [ 00:16:25.432 { 00:16:25.432 "name": "BaseBdev1", 00:16:25.432 "uuid": "ca51d9f0-0abf-4248-bf8f-6bb44335fe9b", 00:16:25.432 "is_configured": true, 00:16:25.432 "data_offset": 0, 00:16:25.432 "data_size": 65536 00:16:25.432 }, 00:16:25.432 { 00:16:25.432 "name": "BaseBdev2", 00:16:25.433 "uuid": "8fab4f9b-f921-4648-a97e-3429adf05dd7", 00:16:25.433 "is_configured": true, 00:16:25.433 "data_offset": 0, 00:16:25.433 "data_size": 65536 00:16:25.433 }, 00:16:25.433 { 00:16:25.433 "name": "BaseBdev3", 00:16:25.433 "uuid": "fd96febd-f06d-4d12-8e91-2e3bd0b2ff1b", 00:16:25.433 "is_configured": true, 00:16:25.433 "data_offset": 0, 00:16:25.433 "data_size": 65536 00:16:25.433 } 00:16:25.433 ] 00:16:25.433 }' 00:16:25.433 16:32:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:25.433 16:32:02 -- common/autotest_common.sh@10 -- # set +x 00:16:26.370 16:32:02 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:26.370 [2024-07-11 16:32:03.129099] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.370 [2024-07-11 16:32:03.129281] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.370 [2024-07-11 16:32:03.129452] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.630 16:32:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.892 16:32:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:26.892 "name": "Existed_Raid", 00:16:26.892 "uuid": "e1ea6bbb-53e6-4f08-bf43-610b2342c3b4", 00:16:26.892 "strip_size_kb": 64, 00:16:26.892 "state": "offline", 00:16:26.892 "raid_level": "raid0", 00:16:26.892 "superblock": false, 00:16:26.892 "num_base_bdevs": 3, 00:16:26.892 "num_base_bdevs_discovered": 2, 00:16:26.892 "num_base_bdevs_operational": 2, 00:16:26.892 "base_bdevs_list": [ 
00:16:26.892 { 00:16:26.892 "name": null, 00:16:26.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.892 "is_configured": false, 00:16:26.892 "data_offset": 0, 00:16:26.892 "data_size": 65536 00:16:26.892 }, 00:16:26.892 { 00:16:26.892 "name": "BaseBdev2", 00:16:26.892 "uuid": "8fab4f9b-f921-4648-a97e-3429adf05dd7", 00:16:26.892 "is_configured": true, 00:16:26.892 "data_offset": 0, 00:16:26.892 "data_size": 65536 00:16:26.892 }, 00:16:26.892 { 00:16:26.892 "name": "BaseBdev3", 00:16:26.892 "uuid": "fd96febd-f06d-4d12-8e91-2e3bd0b2ff1b", 00:16:26.892 "is_configured": true, 00:16:26.892 "data_offset": 0, 00:16:26.892 "data_size": 65536 00:16:26.892 } 00:16:26.892 ] 00:16:26.892 }' 00:16:26.892 16:32:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:26.892 16:32:03 -- common/autotest_common.sh@10 -- # set +x 00:16:27.489 16:32:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:27.489 16:32:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:27.489 16:32:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.489 16:32:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:27.746 16:32:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:27.746 16:32:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:27.746 16:32:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:28.005 [2024-07-11 16:32:04.648915] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:28.005 16:32:04 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:28.005 16:32:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:28.005 16:32:04 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.005 16:32:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:28.263 16:32:04 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:28.263 16:32:04 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:28.263 16:32:04 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:28.521 [2024-07-11 16:32:05.208391] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:28.521 [2024-07-11 16:32:05.208557] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:28.521 16:32:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:28.521 16:32:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:28.521 16:32:05 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.521 16:32:05 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:28.779 16:32:05 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:28.779 16:32:05 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:28.779 16:32:05 -- bdev/bdev_raid.sh@287 -- # killprocess 117674 00:16:28.779 16:32:05 -- common/autotest_common.sh@926 -- # '[' -z 117674 ']' 00:16:28.779 16:32:05 -- common/autotest_common.sh@930 -- # kill -0 117674 00:16:28.779 16:32:05 -- common/autotest_common.sh@931 -- # uname 00:16:28.779 16:32:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:28.779 16:32:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 117674 00:16:28.779 killing process with 
pid 117674 00:16:28.779 16:32:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:28.779 16:32:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:28.779 16:32:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 117674' 00:16:28.779 16:32:05 -- common/autotest_common.sh@945 -- # kill 117674 00:16:28.779 16:32:05 -- common/autotest_common.sh@950 -- # wait 117674 00:16:28.779 [2024-07-11 16:32:05.555249] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.779 [2024-07-11 16:32:05.555389] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.713 ************************************ 00:16:29.713 END TEST raid_state_function_test 00:16:29.713 ************************************ 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:29.713 00:16:29.713 real 0m12.008s 00:16:29.713 user 0m21.475s 00:16:29.713 sys 0m1.355s 00:16:29.713 16:32:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.713 16:32:06 -- common/autotest_common.sh@10 -- # set +x 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:16:29.713 16:32:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:29.713 16:32:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:29.713 16:32:06 -- common/autotest_common.sh@10 -- # set +x 00:16:29.713 ************************************ 00:16:29.713 START TEST raid_state_function_test_sb 00:16:29.713 ************************************ 00:16:29.713 16:32:06 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 
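A note for readers following the trace: every verify_raid_bdev_state call above reduces to one RPC plus a jq filter over its output. A minimal standalone sketch, using the rpc.py path and socket from this log; it simplifies the real helper in bdev_raid.sh, which evidently also tracks raid_level, strip_size and the base-bdev counts:

    # Sketch: check that a raid bdev reached an expected state (simplified
    # relative to verify_raid_bdev_state in bdev_raid.sh).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r .state <<< "$info")
    [ "$state" = offline ] || echo "unexpected raid state: $state"

Because raid0 carries no redundancy (has_redundancy returned 1 above), deleting any base bdev is expected to drive the array from online to offline, which is exactly the transition the preceding trace shows.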
00:16:29.713 16:32:06 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:29.971 16:32:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=118073 00:16:29.971 16:32:06 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:29.971 Process raid pid: 118073 00:16:29.971 16:32:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118073' 00:16:29.971 16:32:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118073 /var/tmp/spdk-raid.sock 00:16:29.971 16:32:06 -- common/autotest_common.sh@819 -- # '[' -z 118073 ']' 00:16:29.971 16:32:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:29.971 16:32:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:29.971 16:32:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:29.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:29.971 16:32:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:29.971 16:32:06 -- common/autotest_common.sh@10 -- # set +x 00:16:29.971 [2024-07-11 16:32:06.578892] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:29.971 [2024-07-11 16:32:06.579330] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.971 [2024-07-11 16:32:06.745100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.230 [2024-07-11 16:32:06.900915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.489 [2024-07-11 16:32:07.066090] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.748 16:32:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:30.748 16:32:07 -- common/autotest_common.sh@852 -- # return 0 00:16:30.748 16:32:07 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:31.006 [2024-07-11 16:32:07.658988] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.006 [2024-07-11 16:32:07.659228] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.006 [2024-07-11 16:32:07.659335] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.006 [2024-07-11 16:32:07.659393] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.006 [2024-07-11 16:32:07.659481] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.006 [2024-07-11 16:32:07.659557] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.006 16:32:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:31.006 16:32:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:31.006 16:32:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.006 16:32:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:31.006 16:32:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:31.006 16:32:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
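The launch sequence visible above — bdev_svc started with -r pointing at a private socket, then waitforlisten polling before any RPC is issued — can be reproduced in a few lines. A sketch under the assumption that polling rpc_get_methods is an acceptable readiness probe; the real waitforlisten helper is more elaborate and caps attempts at the max_retries=100 set above:

    # Sketch: start the bare bdev service and wait for its RPC socket.
    svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$svc" -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # assumption: rpc_get_methods as a readiness probe; waitforlisten differs
    until "$rpc" -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done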
00:16:31.006 16:32:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.006 16:32:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.006 16:32:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.006 16:32:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.007 16:32:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.007 16:32:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.265 16:32:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.265 "name": "Existed_Raid", 00:16:31.265 "uuid": "20f72520-ca5a-49d3-8029-df9030b5a92c", 00:16:31.265 "strip_size_kb": 64, 00:16:31.265 "state": "configuring", 00:16:31.265 "raid_level": "raid0", 00:16:31.265 "superblock": true, 00:16:31.265 "num_base_bdevs": 3, 00:16:31.265 "num_base_bdevs_discovered": 0, 00:16:31.265 "num_base_bdevs_operational": 3, 00:16:31.265 "base_bdevs_list": [ 00:16:31.265 { 00:16:31.265 "name": "BaseBdev1", 00:16:31.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.265 "is_configured": false, 00:16:31.265 "data_offset": 0, 00:16:31.265 "data_size": 0 00:16:31.265 }, 00:16:31.265 { 00:16:31.265 "name": "BaseBdev2", 00:16:31.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.265 "is_configured": false, 00:16:31.265 "data_offset": 0, 00:16:31.265 "data_size": 0 00:16:31.265 }, 00:16:31.265 { 00:16:31.265 "name": "BaseBdev3", 00:16:31.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.265 "is_configured": false, 00:16:31.265 "data_offset": 0, 00:16:31.265 "data_size": 0 00:16:31.265 } 00:16:31.265 ] 00:16:31.265 }' 00:16:31.265 16:32:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.265 16:32:07 -- common/autotest_common.sh@10 -- # set +x 00:16:31.831 16:32:08 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:32.090 [2024-07-11 16:32:08.835058] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:32.090 [2024-07-11 16:32:08.835212] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:32.090 16:32:08 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:32.348 [2024-07-11 16:32:09.019141] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:32.348 [2024-07-11 16:32:09.019318] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:32.348 [2024-07-11 16:32:09.019410] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:32.348 [2024-07-11 16:32:09.019550] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:32.348 [2024-07-11 16:32:09.019641] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:32.348 [2024-07-11 16:32:09.019706] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:32.348 16:32:09 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:32.606 [2024-07-11 16:32:09.232133] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
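Note the ordering this superblock run exercises: with -s the raid is declared before its base bdevs exist, sits in the configuring state with all-zero UUIDs, and claims each base bdev as it appears. The two RPCs involved, copied from the trace:

    # Declare a superblock raid0 first, then create a base bdev it will claim.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    rpc bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    rpc bdev_malloc_create 32 512 -b BaseBdev1   # 32 MiB of 512 B blocks = 65536 blocks

The superblock costs capacity: the bdev dumps that follow show data_offset 2048 and data_size 63488 per base bdev (65536 - 2048), where the earlier non-superblock run showed 0 and 65536.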
00:16:32.606 BaseBdev1 00:16:32.606 16:32:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:32.606 16:32:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:32.606 16:32:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:32.606 16:32:09 -- common/autotest_common.sh@889 -- # local i 00:16:32.606 16:32:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:32.606 16:32:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:32.606 16:32:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:32.864 16:32:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:32.864 [ 00:16:32.864 { 00:16:32.864 "name": "BaseBdev1", 00:16:32.864 "aliases": [ 00:16:32.864 "ce9c2163-617c-4887-a7c0-360e649e3e10" 00:16:32.864 ], 00:16:32.864 "product_name": "Malloc disk", 00:16:32.864 "block_size": 512, 00:16:32.864 "num_blocks": 65536, 00:16:32.864 "uuid": "ce9c2163-617c-4887-a7c0-360e649e3e10", 00:16:32.864 "assigned_rate_limits": { 00:16:32.864 "rw_ios_per_sec": 0, 00:16:32.864 "rw_mbytes_per_sec": 0, 00:16:32.864 "r_mbytes_per_sec": 0, 00:16:32.864 "w_mbytes_per_sec": 0 00:16:32.864 }, 00:16:32.864 "claimed": true, 00:16:32.864 "claim_type": "exclusive_write", 00:16:32.864 "zoned": false, 00:16:32.864 "supported_io_types": { 00:16:32.864 "read": true, 00:16:32.864 "write": true, 00:16:32.864 "unmap": true, 00:16:32.864 "write_zeroes": true, 00:16:32.864 "flush": true, 00:16:32.864 "reset": true, 00:16:32.864 "compare": false, 00:16:32.864 "compare_and_write": false, 00:16:32.864 "abort": true, 00:16:32.864 "nvme_admin": false, 00:16:32.864 "nvme_io": false 00:16:32.864 }, 00:16:32.864 "memory_domains": [ 00:16:32.864 { 00:16:32.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.864 "dma_device_type": 2 00:16:32.864 } 00:16:32.864 ], 00:16:32.864 "driver_specific": {} 00:16:32.864 } 00:16:32.864 ] 00:16:32.864 16:32:09 -- common/autotest_common.sh@895 -- # return 0 00:16:32.864 16:32:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:32.864 16:32:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:32.864 16:32:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:32.864 16:32:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:32.864 16:32:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:32.864 16:32:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:32.864 16:32:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:32.864 16:32:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:32.864 16:32:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:32.864 16:32:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:32.864 16:32:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.864 16:32:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.122 16:32:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.122 "name": "Existed_Raid", 00:16:33.122 "uuid": "ee7e1678-9bd0-46ec-b48e-02d97a09eafc", 00:16:33.122 "strip_size_kb": 64, 00:16:33.122 "state": "configuring", 00:16:33.122 "raid_level": "raid0", 00:16:33.122 "superblock": true, 00:16:33.122 "num_base_bdevs": 3, 00:16:33.122 
"num_base_bdevs_discovered": 1, 00:16:33.122 "num_base_bdevs_operational": 3, 00:16:33.122 "base_bdevs_list": [ 00:16:33.122 { 00:16:33.122 "name": "BaseBdev1", 00:16:33.122 "uuid": "ce9c2163-617c-4887-a7c0-360e649e3e10", 00:16:33.122 "is_configured": true, 00:16:33.122 "data_offset": 2048, 00:16:33.122 "data_size": 63488 00:16:33.122 }, 00:16:33.122 { 00:16:33.122 "name": "BaseBdev2", 00:16:33.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.122 "is_configured": false, 00:16:33.122 "data_offset": 0, 00:16:33.122 "data_size": 0 00:16:33.122 }, 00:16:33.122 { 00:16:33.122 "name": "BaseBdev3", 00:16:33.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.122 "is_configured": false, 00:16:33.122 "data_offset": 0, 00:16:33.122 "data_size": 0 00:16:33.122 } 00:16:33.122 ] 00:16:33.122 }' 00:16:33.122 16:32:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.122 16:32:09 -- common/autotest_common.sh@10 -- # set +x 00:16:33.688 16:32:10 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:33.946 [2024-07-11 16:32:10.664412] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.946 [2024-07-11 16:32:10.664593] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:33.946 16:32:10 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:33.946 16:32:10 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:34.204 16:32:10 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:34.462 BaseBdev1 00:16:34.720 16:32:11 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:34.720 16:32:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:34.720 16:32:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:34.720 16:32:11 -- common/autotest_common.sh@889 -- # local i 00:16:34.720 16:32:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:34.720 16:32:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:34.720 16:32:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:34.720 16:32:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:34.978 [ 00:16:34.978 { 00:16:34.978 "name": "BaseBdev1", 00:16:34.978 "aliases": [ 00:16:34.978 "21b2ddfd-44f5-41b1-8c82-648015331060" 00:16:34.978 ], 00:16:34.978 "product_name": "Malloc disk", 00:16:34.978 "block_size": 512, 00:16:34.978 "num_blocks": 65536, 00:16:34.978 "uuid": "21b2ddfd-44f5-41b1-8c82-648015331060", 00:16:34.978 "assigned_rate_limits": { 00:16:34.978 "rw_ios_per_sec": 0, 00:16:34.978 "rw_mbytes_per_sec": 0, 00:16:34.978 "r_mbytes_per_sec": 0, 00:16:34.978 "w_mbytes_per_sec": 0 00:16:34.978 }, 00:16:34.978 "claimed": false, 00:16:34.978 "zoned": false, 00:16:34.978 "supported_io_types": { 00:16:34.978 "read": true, 00:16:34.978 "write": true, 00:16:34.978 "unmap": true, 00:16:34.978 "write_zeroes": true, 00:16:34.978 "flush": true, 00:16:34.978 "reset": true, 00:16:34.978 "compare": false, 00:16:34.978 "compare_and_write": false, 00:16:34.978 "abort": true, 00:16:34.978 "nvme_admin": false, 00:16:34.978 "nvme_io": false 00:16:34.978 }, 00:16:34.978 
"memory_domains": [ 00:16:34.978 { 00:16:34.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:34.978 "dma_device_type": 2 00:16:34.978 } 00:16:34.978 ], 00:16:34.978 "driver_specific": {} 00:16:34.978 } 00:16:34.978 ] 00:16:34.978 16:32:11 -- common/autotest_common.sh@895 -- # return 0 00:16:34.978 16:32:11 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:35.236 [2024-07-11 16:32:11.860807] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.236 [2024-07-11 16:32:11.862461] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:35.236 [2024-07-11 16:32:11.862641] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.236 [2024-07-11 16:32:11.862737] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:35.236 [2024-07-11 16:32:11.862794] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:35.236 16:32:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:35.236 16:32:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:35.236 16:32:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:35.236 16:32:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:35.236 16:32:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:35.236 16:32:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:35.236 16:32:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:35.236 16:32:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:35.236 16:32:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:35.237 16:32:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:35.237 16:32:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:35.237 16:32:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:35.237 16:32:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.237 16:32:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.494 16:32:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.494 "name": "Existed_Raid", 00:16:35.494 "uuid": "9b702f8c-35f5-43fe-b464-c2804e0ddce5", 00:16:35.494 "strip_size_kb": 64, 00:16:35.494 "state": "configuring", 00:16:35.494 "raid_level": "raid0", 00:16:35.494 "superblock": true, 00:16:35.494 "num_base_bdevs": 3, 00:16:35.494 "num_base_bdevs_discovered": 1, 00:16:35.494 "num_base_bdevs_operational": 3, 00:16:35.494 "base_bdevs_list": [ 00:16:35.494 { 00:16:35.494 "name": "BaseBdev1", 00:16:35.494 "uuid": "21b2ddfd-44f5-41b1-8c82-648015331060", 00:16:35.494 "is_configured": true, 00:16:35.494 "data_offset": 2048, 00:16:35.494 "data_size": 63488 00:16:35.494 }, 00:16:35.494 { 00:16:35.494 "name": "BaseBdev2", 00:16:35.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.494 "is_configured": false, 00:16:35.494 "data_offset": 0, 00:16:35.494 "data_size": 0 00:16:35.494 }, 00:16:35.494 { 00:16:35.494 "name": "BaseBdev3", 00:16:35.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.494 "is_configured": false, 00:16:35.494 "data_offset": 0, 00:16:35.494 "data_size": 0 00:16:35.494 } 00:16:35.494 ] 00:16:35.494 }' 00:16:35.494 16:32:12 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.494 16:32:12 -- common/autotest_common.sh@10 -- # set +x 00:16:36.060 16:32:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:36.318 [2024-07-11 16:32:12.914290] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:36.318 BaseBdev2 00:16:36.318 16:32:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:36.318 16:32:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:36.318 16:32:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:36.318 16:32:12 -- common/autotest_common.sh@889 -- # local i 00:16:36.318 16:32:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:36.318 16:32:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:36.318 16:32:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.318 16:32:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:36.575 [ 00:16:36.575 { 00:16:36.575 "name": "BaseBdev2", 00:16:36.575 "aliases": [ 00:16:36.575 "e8410154-9734-4166-9514-60b61774db23" 00:16:36.575 ], 00:16:36.575 "product_name": "Malloc disk", 00:16:36.575 "block_size": 512, 00:16:36.575 "num_blocks": 65536, 00:16:36.575 "uuid": "e8410154-9734-4166-9514-60b61774db23", 00:16:36.575 "assigned_rate_limits": { 00:16:36.575 "rw_ios_per_sec": 0, 00:16:36.575 "rw_mbytes_per_sec": 0, 00:16:36.575 "r_mbytes_per_sec": 0, 00:16:36.575 "w_mbytes_per_sec": 0 00:16:36.575 }, 00:16:36.575 "claimed": true, 00:16:36.575 "claim_type": "exclusive_write", 00:16:36.575 "zoned": false, 00:16:36.575 "supported_io_types": { 00:16:36.575 "read": true, 00:16:36.575 "write": true, 00:16:36.575 "unmap": true, 00:16:36.575 "write_zeroes": true, 00:16:36.575 "flush": true, 00:16:36.575 "reset": true, 00:16:36.575 "compare": false, 00:16:36.575 "compare_and_write": false, 00:16:36.575 "abort": true, 00:16:36.575 "nvme_admin": false, 00:16:36.575 "nvme_io": false 00:16:36.575 }, 00:16:36.575 "memory_domains": [ 00:16:36.575 { 00:16:36.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.575 "dma_device_type": 2 00:16:36.575 } 00:16:36.575 ], 00:16:36.575 "driver_specific": {} 00:16:36.575 } 00:16:36.575 ] 00:16:36.575 16:32:13 -- common/autotest_common.sh@895 -- # return 0 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.575 16:32:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.833 16:32:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:36.833 "name": "Existed_Raid", 00:16:36.833 "uuid": "9b702f8c-35f5-43fe-b464-c2804e0ddce5", 00:16:36.833 "strip_size_kb": 64, 00:16:36.833 "state": "configuring", 00:16:36.833 "raid_level": "raid0", 00:16:36.833 "superblock": true, 00:16:36.833 "num_base_bdevs": 3, 00:16:36.833 "num_base_bdevs_discovered": 2, 00:16:36.833 "num_base_bdevs_operational": 3, 00:16:36.833 "base_bdevs_list": [ 00:16:36.833 { 00:16:36.833 "name": "BaseBdev1", 00:16:36.833 "uuid": "21b2ddfd-44f5-41b1-8c82-648015331060", 00:16:36.833 "is_configured": true, 00:16:36.833 "data_offset": 2048, 00:16:36.833 "data_size": 63488 00:16:36.833 }, 00:16:36.833 { 00:16:36.833 "name": "BaseBdev2", 00:16:36.833 "uuid": "e8410154-9734-4166-9514-60b61774db23", 00:16:36.833 "is_configured": true, 00:16:36.833 "data_offset": 2048, 00:16:36.833 "data_size": 63488 00:16:36.833 }, 00:16:36.833 { 00:16:36.833 "name": "BaseBdev3", 00:16:36.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.833 "is_configured": false, 00:16:36.833 "data_offset": 0, 00:16:36.833 "data_size": 0 00:16:36.833 } 00:16:36.833 ] 00:16:36.833 }' 00:16:36.833 16:32:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:36.833 16:32:13 -- common/autotest_common.sh@10 -- # set +x 00:16:37.778 16:32:14 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:37.778 [2024-07-11 16:32:14.440015] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.778 [2024-07-11 16:32:14.440382] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:37.778 BaseBdev3 00:16:37.778 [2024-07-11 16:32:14.440867] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:37.778 [2024-07-11 16:32:14.441142] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:37.778 [2024-07-11 16:32:14.446012] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:37.778 [2024-07-11 16:32:14.446317] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:37.778 [2024-07-11 16:32:14.447023] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.778 16:32:14 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:37.778 16:32:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:37.778 16:32:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:37.778 16:32:14 -- common/autotest_common.sh@889 -- # local i 00:16:37.778 16:32:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:37.778 16:32:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:37.778 16:32:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:38.045 16:32:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:38.303 [ 00:16:38.303 { 00:16:38.303 "name": "BaseBdev3", 00:16:38.303 "aliases": [ 00:16:38.303 "e87977f4-641e-4a94-8249-48fdabcf54a7" 00:16:38.303 ], 00:16:38.303 "product_name": 
"Malloc disk", 00:16:38.303 "block_size": 512, 00:16:38.303 "num_blocks": 65536, 00:16:38.303 "uuid": "e87977f4-641e-4a94-8249-48fdabcf54a7", 00:16:38.303 "assigned_rate_limits": { 00:16:38.303 "rw_ios_per_sec": 0, 00:16:38.303 "rw_mbytes_per_sec": 0, 00:16:38.303 "r_mbytes_per_sec": 0, 00:16:38.303 "w_mbytes_per_sec": 0 00:16:38.303 }, 00:16:38.303 "claimed": true, 00:16:38.303 "claim_type": "exclusive_write", 00:16:38.303 "zoned": false, 00:16:38.303 "supported_io_types": { 00:16:38.303 "read": true, 00:16:38.303 "write": true, 00:16:38.303 "unmap": true, 00:16:38.303 "write_zeroes": true, 00:16:38.303 "flush": true, 00:16:38.303 "reset": true, 00:16:38.303 "compare": false, 00:16:38.303 "compare_and_write": false, 00:16:38.303 "abort": true, 00:16:38.303 "nvme_admin": false, 00:16:38.303 "nvme_io": false 00:16:38.303 }, 00:16:38.303 "memory_domains": [ 00:16:38.303 { 00:16:38.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.303 "dma_device_type": 2 00:16:38.303 } 00:16:38.303 ], 00:16:38.303 "driver_specific": {} 00:16:38.303 } 00:16:38.303 ] 00:16:38.303 16:32:14 -- common/autotest_common.sh@895 -- # return 0 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.303 16:32:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.563 16:32:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.563 "name": "Existed_Raid", 00:16:38.563 "uuid": "9b702f8c-35f5-43fe-b464-c2804e0ddce5", 00:16:38.563 "strip_size_kb": 64, 00:16:38.563 "state": "online", 00:16:38.563 "raid_level": "raid0", 00:16:38.563 "superblock": true, 00:16:38.563 "num_base_bdevs": 3, 00:16:38.563 "num_base_bdevs_discovered": 3, 00:16:38.563 "num_base_bdevs_operational": 3, 00:16:38.563 "base_bdevs_list": [ 00:16:38.563 { 00:16:38.563 "name": "BaseBdev1", 00:16:38.563 "uuid": "21b2ddfd-44f5-41b1-8c82-648015331060", 00:16:38.563 "is_configured": true, 00:16:38.563 "data_offset": 2048, 00:16:38.563 "data_size": 63488 00:16:38.563 }, 00:16:38.563 { 00:16:38.563 "name": "BaseBdev2", 00:16:38.563 "uuid": "e8410154-9734-4166-9514-60b61774db23", 00:16:38.563 "is_configured": true, 00:16:38.563 "data_offset": 2048, 00:16:38.563 "data_size": 63488 00:16:38.563 }, 00:16:38.563 { 00:16:38.563 "name": "BaseBdev3", 00:16:38.563 "uuid": "e87977f4-641e-4a94-8249-48fdabcf54a7", 00:16:38.563 "is_configured": true, 00:16:38.563 "data_offset": 2048, 00:16:38.563 "data_size": 63488 00:16:38.563 } 00:16:38.563 ] 00:16:38.563 }' 00:16:38.563 16:32:15 -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:16:38.563 16:32:15 -- common/autotest_common.sh@10 -- # set +x 00:16:39.129 16:32:15 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:39.387 [2024-07-11 16:32:15.965686] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:39.387 [2024-07-11 16:32:15.965840] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.387 [2024-07-11 16:32:15.966024] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.387 16:32:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:39.645 16:32:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:39.645 "name": "Existed_Raid", 00:16:39.645 "uuid": "9b702f8c-35f5-43fe-b464-c2804e0ddce5", 00:16:39.645 "strip_size_kb": 64, 00:16:39.645 "state": "offline", 00:16:39.645 "raid_level": "raid0", 00:16:39.645 "superblock": true, 00:16:39.645 "num_base_bdevs": 3, 00:16:39.645 "num_base_bdevs_discovered": 2, 00:16:39.645 "num_base_bdevs_operational": 2, 00:16:39.645 "base_bdevs_list": [ 00:16:39.645 { 00:16:39.645 "name": null, 00:16:39.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.645 "is_configured": false, 00:16:39.645 "data_offset": 2048, 00:16:39.645 "data_size": 63488 00:16:39.645 }, 00:16:39.645 { 00:16:39.645 "name": "BaseBdev2", 00:16:39.645 "uuid": "e8410154-9734-4166-9514-60b61774db23", 00:16:39.645 "is_configured": true, 00:16:39.645 "data_offset": 2048, 00:16:39.645 "data_size": 63488 00:16:39.645 }, 00:16:39.645 { 00:16:39.645 "name": "BaseBdev3", 00:16:39.645 "uuid": "e87977f4-641e-4a94-8249-48fdabcf54a7", 00:16:39.645 "is_configured": true, 00:16:39.645 "data_offset": 2048, 00:16:39.645 "data_size": 63488 00:16:39.645 } 00:16:39.645 ] 00:16:39.645 }' 00:16:39.645 16:32:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:39.645 16:32:16 -- common/autotest_common.sh@10 -- # set +x 00:16:40.211 16:32:16 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:40.211 16:32:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:40.211 16:32:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:40.211 16:32:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:40.470 16:32:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:40.470 16:32:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.470 16:32:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:40.470 [2024-07-11 16:32:17.257371] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:40.729 16:32:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:40.729 16:32:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:40.729 16:32:17 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.729 16:32:17 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:40.729 16:32:17 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:40.729 16:32:17 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:40.729 16:32:17 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:40.988 [2024-07-11 16:32:17.686326] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:40.988 [2024-07-11 16:32:17.686501] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:16:40.988 16:32:17 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:40.988 16:32:17 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:40.988 16:32:17 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:40.988 16:32:17 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.247 16:32:18 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:41.247 16:32:18 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:41.247 16:32:18 -- bdev/bdev_raid.sh@287 -- # killprocess 118073 00:16:41.247 16:32:18 -- common/autotest_common.sh@926 -- # '[' -z 118073 ']' 00:16:41.247 16:32:18 -- common/autotest_common.sh@930 -- # kill -0 118073 00:16:41.247 16:32:18 -- common/autotest_common.sh@931 -- # uname 00:16:41.247 16:32:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:41.247 16:32:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118073 00:16:41.247 killing process with pid 118073 00:16:41.247 16:32:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:41.247 16:32:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:41.247 16:32:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118073' 00:16:41.247 16:32:18 -- common/autotest_common.sh@945 -- # kill 118073 00:16:41.247 16:32:18 -- common/autotest_common.sh@950 -- # wait 118073 00:16:41.247 [2024-07-11 16:32:18.030335] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:41.247 [2024-07-11 16:32:18.030446] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:42.183 ************************************ 00:16:42.183 END TEST raid_state_function_test_sb 00:16:42.183 ************************************ 00:16:42.183 16:32:18 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:42.183 00:16:42.183 real 0m12.417s 00:16:42.183 user 0m22.261s 00:16:42.183 sys 0m1.225s 00:16:42.183 16:32:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:42.183 16:32:18 -- common/autotest_common.sh@10 -- # set +x 00:16:42.183 
16:32:18 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:16:42.184 16:32:18 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:42.184 16:32:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:42.184 16:32:18 -- common/autotest_common.sh@10 -- # set +x 00:16:42.184 ************************************ 00:16:42.184 START TEST raid_superblock_test 00:16:42.184 ************************************ 00:16:42.184 16:32:18 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:16:42.184 16:32:18 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:16:42.184 16:32:18 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:42.184 16:32:18 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:42.184 16:32:18 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:42.184 16:32:18 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:42.184 16:32:18 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:42.184 16:32:18 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:42.184 16:32:18 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:42.184 16:32:18 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:42.184 16:32:18 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:42.184 16:32:18 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:42.184 16:32:18 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:42.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:42.442 16:32:18 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:42.442 16:32:18 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:16:42.442 16:32:18 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:42.442 16:32:18 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:42.442 16:32:18 -- bdev/bdev_raid.sh@357 -- # raid_pid=118484 00:16:42.442 16:32:18 -- bdev/bdev_raid.sh@358 -- # waitforlisten 118484 /var/tmp/spdk-raid.sock 00:16:42.442 16:32:18 -- common/autotest_common.sh@819 -- # '[' -z 118484 ']' 00:16:42.442 16:32:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:42.442 16:32:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:42.442 16:32:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:42.442 16:32:18 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:42.442 16:32:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:42.442 16:32:18 -- common/autotest_common.sh@10 -- # set +x 00:16:42.442 [2024-07-11 16:32:19.052053] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
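raid_superblock_test builds each leg of the array as a malloc bdev wrapped in a passthru bdev with a fixed UUID, presumably so the superblock contents can be checked against stable, known identifiers. The per-leg setup, copied from the RPCs that follow:

    # One raid leg: a malloc bdev wrapped in a passthru bdev with a fixed UUID.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    rpc bdev_malloc_create 32 512 -b malloc1
    rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

The trace further down confirms the failure mode this enables: recreating the raid directly on malloc1/malloc2/malloc3 is rejected with JSON-RPC error -17 ("File exists") because those bdevs already carry a raid superblock.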
00:16:42.442 [2024-07-11 16:32:19.052387] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118484 ] 00:16:42.442 [2024-07-11 16:32:19.215715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.701 [2024-07-11 16:32:19.374063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.959 [2024-07-11 16:32:19.536107] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.217 16:32:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:43.217 16:32:19 -- common/autotest_common.sh@852 -- # return 0 00:16:43.217 16:32:19 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:43.217 16:32:19 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:43.217 16:32:19 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:43.217 16:32:19 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:43.217 16:32:19 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:43.217 16:32:19 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:43.217 16:32:19 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:43.217 16:32:19 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:43.217 16:32:19 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:43.475 malloc1 00:16:43.475 16:32:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:43.733 [2024-07-11 16:32:20.405967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:43.733 [2024-07-11 16:32:20.406183] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.733 [2024-07-11 16:32:20.406246] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:43.733 [2024-07-11 16:32:20.406481] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.733 [2024-07-11 16:32:20.408405] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.733 [2024-07-11 16:32:20.408580] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:43.733 pt1 00:16:43.733 16:32:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:43.733 16:32:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:43.733 16:32:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:43.733 16:32:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:43.733 16:32:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:43.733 16:32:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:43.733 16:32:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:43.733 16:32:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:43.733 16:32:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:43.991 malloc2 00:16:43.991 16:32:20 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:16:44.250 [2024-07-11 16:32:20.867290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:44.250 [2024-07-11 16:32:20.867492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.250 [2024-07-11 16:32:20.867564] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:44.250 [2024-07-11 16:32:20.867827] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.250 [2024-07-11 16:32:20.869796] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.250 [2024-07-11 16:32:20.869971] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:44.250 pt2 00:16:44.250 16:32:20 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:44.250 16:32:20 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:44.250 16:32:20 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:44.250 16:32:20 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:44.250 16:32:20 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:44.250 16:32:20 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:44.250 16:32:20 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:44.250 16:32:20 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:44.250 16:32:20 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:44.508 malloc3 00:16:44.508 16:32:21 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:44.767 [2024-07-11 16:32:21.327932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:44.767 [2024-07-11 16:32:21.328138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.767 [2024-07-11 16:32:21.328209] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:44.767 [2024-07-11 16:32:21.328471] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.767 [2024-07-11 16:32:21.330541] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.767 [2024-07-11 16:32:21.330719] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:44.767 pt3 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:44.767 [2024-07-11 16:32:21.503964] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:44.767 [2024-07-11 16:32:21.505615] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:44.767 [2024-07-11 16:32:21.505795] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:44.767 [2024-07-11 16:32:21.506000] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:44.767 [2024-07-11 16:32:21.506100] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:44.767 [2024-07-11 16:32:21.506253] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:44.767 [2024-07-11 16:32:21.506597] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:44.767 [2024-07-11 16:32:21.506699] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:44.767 [2024-07-11 16:32:21.506908] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.767 16:32:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.025 16:32:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:45.025 "name": "raid_bdev1", 00:16:45.025 "uuid": "2a505b09-19a0-4677-9efc-2a90214b52ae", 00:16:45.025 "strip_size_kb": 64, 00:16:45.025 "state": "online", 00:16:45.025 "raid_level": "raid0", 00:16:45.025 "superblock": true, 00:16:45.025 "num_base_bdevs": 3, 00:16:45.025 "num_base_bdevs_discovered": 3, 00:16:45.025 "num_base_bdevs_operational": 3, 00:16:45.025 "base_bdevs_list": [ 00:16:45.025 { 00:16:45.025 "name": "pt1", 00:16:45.025 "uuid": "af3083af-8931-541f-89e1-3c4edfa9646f", 00:16:45.025 "is_configured": true, 00:16:45.026 "data_offset": 2048, 00:16:45.026 "data_size": 63488 00:16:45.026 }, 00:16:45.026 { 00:16:45.026 "name": "pt2", 00:16:45.026 "uuid": "6e6a6991-6173-5fa7-83b2-ea6b86ad7399", 00:16:45.026 "is_configured": true, 00:16:45.026 "data_offset": 2048, 00:16:45.026 "data_size": 63488 00:16:45.026 }, 00:16:45.026 { 00:16:45.026 "name": "pt3", 00:16:45.026 "uuid": "f7e34009-f306-5696-9f9f-5ff17723d218", 00:16:45.026 "is_configured": true, 00:16:45.026 "data_offset": 2048, 00:16:45.026 "data_size": 63488 00:16:45.026 } 00:16:45.026 ] 00:16:45.026 }' 00:16:45.026 16:32:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:45.026 16:32:21 -- common/autotest_common.sh@10 -- # set +x 00:16:45.592 16:32:22 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:45.592 16:32:22 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:45.851 [2024-07-11 16:32:22.548296] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.851 16:32:22 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=2a505b09-19a0-4677-9efc-2a90214b52ae 00:16:45.851 16:32:22 -- bdev/bdev_raid.sh@380 -- # '[' -z 2a505b09-19a0-4677-9efc-2a90214b52ae ']' 00:16:45.851 16:32:22 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:46.109 [2024-07-11 16:32:22.728130] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:46.109 [2024-07-11 16:32:22.728284] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:46.109 [2024-07-11 16:32:22.728472] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.109 [2024-07-11 16:32:22.728613] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.109 [2024-07-11 16:32:22.728715] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:46.109 16:32:22 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.109 16:32:22 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:46.367 16:32:22 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:46.367 16:32:22 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:46.367 16:32:22 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:46.367 16:32:22 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:46.367 16:32:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:46.367 16:32:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:46.624 16:32:23 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:46.624 16:32:23 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:46.882 16:32:23 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:46.882 16:32:23 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:46.882 16:32:23 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:46.882 16:32:23 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:46.882 16:32:23 -- common/autotest_common.sh@640 -- # local es=0 00:16:46.882 16:32:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:46.882 16:32:23 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:46.882 16:32:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:46.882 16:32:23 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.140 16:32:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:47.140 16:32:23 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.140 16:32:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:47.140 16:32:23 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:47.140 16:32:23 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:47.140 16:32:23 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:47.140 [2024-07-11 16:32:23.924333] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:47.140 [2024-07-11 16:32:23.925984] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:47.140 [2024-07-11 16:32:23.926145] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:47.140 [2024-07-11 16:32:23.926236] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:47.140 [2024-07-11 16:32:23.926529] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:47.140 [2024-07-11 16:32:23.926705] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:47.140 [2024-07-11 16:32:23.926873] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.140 [2024-07-11 16:32:23.926969] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:16:47.140 request: 00:16:47.140 { 00:16:47.140 "name": "raid_bdev1", 00:16:47.140 "raid_level": "raid0", 00:16:47.140 "base_bdevs": [ 00:16:47.140 "malloc1", 00:16:47.140 "malloc2", 00:16:47.140 "malloc3" 00:16:47.140 ], 00:16:47.140 "superblock": false, 00:16:47.140 "strip_size_kb": 64, 00:16:47.140 "method": "bdev_raid_create", 00:16:47.140 "req_id": 1 00:16:47.140 } 00:16:47.140 Got JSON-RPC error response 00:16:47.140 response: 00:16:47.140 { 00:16:47.140 "code": -17, 00:16:47.140 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:47.140 } 00:16:47.140 16:32:23 -- common/autotest_common.sh@643 -- # es=1 00:16:47.140 16:32:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:47.140 16:32:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:47.140 16:32:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:47.140 16:32:23 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.140 16:32:23 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:47.398 16:32:24 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:47.398 16:32:24 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:47.398 16:32:24 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:47.656 [2024-07-11 16:32:24.408368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:47.656 [2024-07-11 16:32:24.408551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.656 [2024-07-11 16:32:24.408617] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:47.656 [2024-07-11 16:32:24.408845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.656 [2024-07-11 16:32:24.410707] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.656 [2024-07-11 16:32:24.410880] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:47.656 [2024-07-11 16:32:24.411084] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:47.656 [2024-07-11 16:32:24.411222] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:47.656 pt1 00:16:47.656 16:32:24 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid0 64 3 00:16:47.656 16:32:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:47.656 16:32:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:47.656 16:32:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:47.656 16:32:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:47.656 16:32:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:47.656 16:32:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:47.656 16:32:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:47.656 16:32:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:47.656 16:32:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:47.656 16:32:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.656 16:32:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.914 16:32:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:47.914 "name": "raid_bdev1", 00:16:47.914 "uuid": "2a505b09-19a0-4677-9efc-2a90214b52ae", 00:16:47.914 "strip_size_kb": 64, 00:16:47.914 "state": "configuring", 00:16:47.914 "raid_level": "raid0", 00:16:47.914 "superblock": true, 00:16:47.914 "num_base_bdevs": 3, 00:16:47.914 "num_base_bdevs_discovered": 1, 00:16:47.914 "num_base_bdevs_operational": 3, 00:16:47.914 "base_bdevs_list": [ 00:16:47.914 { 00:16:47.914 "name": "pt1", 00:16:47.914 "uuid": "af3083af-8931-541f-89e1-3c4edfa9646f", 00:16:47.914 "is_configured": true, 00:16:47.914 "data_offset": 2048, 00:16:47.914 "data_size": 63488 00:16:47.914 }, 00:16:47.914 { 00:16:47.914 "name": null, 00:16:47.914 "uuid": "6e6a6991-6173-5fa7-83b2-ea6b86ad7399", 00:16:47.914 "is_configured": false, 00:16:47.914 "data_offset": 2048, 00:16:47.914 "data_size": 63488 00:16:47.914 }, 00:16:47.914 { 00:16:47.914 "name": null, 00:16:47.914 "uuid": "f7e34009-f306-5696-9f9f-5ff17723d218", 00:16:47.914 "is_configured": false, 00:16:47.914 "data_offset": 2048, 00:16:47.914 "data_size": 63488 00:16:47.914 } 00:16:47.914 ] 00:16:47.914 }' 00:16:47.914 16:32:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:47.914 16:32:24 -- common/autotest_common.sh@10 -- # set +x 00:16:48.480 16:32:25 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:48.480 16:32:25 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.739 [2024-07-11 16:32:25.380599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.739 [2024-07-11 16:32:25.380818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.739 [2024-07-11 16:32:25.380895] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:48.739 [2024-07-11 16:32:25.381098] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.739 [2024-07-11 16:32:25.381655] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.739 [2024-07-11 16:32:25.381804] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.739 [2024-07-11 16:32:25.382011] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:48.739 [2024-07-11 16:32:25.382135] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.739 pt2 00:16:48.739 16:32:25 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:48.998 [2024-07-11 16:32:25.560622] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:48.998 16:32:25 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:16:48.998 16:32:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:48.998 16:32:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:48.998 16:32:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:48.998 16:32:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:48.998 16:32:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:48.998 16:32:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:48.998 16:32:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:48.998 16:32:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:48.998 16:32:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:48.998 16:32:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.998 16:32:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.998 16:32:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:48.998 "name": "raid_bdev1", 00:16:48.998 "uuid": "2a505b09-19a0-4677-9efc-2a90214b52ae", 00:16:48.998 "strip_size_kb": 64, 00:16:48.998 "state": "configuring", 00:16:48.998 "raid_level": "raid0", 00:16:48.998 "superblock": true, 00:16:48.998 "num_base_bdevs": 3, 00:16:48.998 "num_base_bdevs_discovered": 1, 00:16:48.998 "num_base_bdevs_operational": 3, 00:16:48.998 "base_bdevs_list": [ 00:16:48.998 { 00:16:48.998 "name": "pt1", 00:16:48.998 "uuid": "af3083af-8931-541f-89e1-3c4edfa9646f", 00:16:48.998 "is_configured": true, 00:16:48.998 "data_offset": 2048, 00:16:48.998 "data_size": 63488 00:16:48.998 }, 00:16:48.999 { 00:16:48.999 "name": null, 00:16:48.999 "uuid": "6e6a6991-6173-5fa7-83b2-ea6b86ad7399", 00:16:48.999 "is_configured": false, 00:16:48.999 "data_offset": 2048, 00:16:48.999 "data_size": 63488 00:16:48.999 }, 00:16:48.999 { 00:16:48.999 "name": null, 00:16:48.999 "uuid": "f7e34009-f306-5696-9f9f-5ff17723d218", 00:16:48.999 "is_configured": false, 00:16:48.999 "data_offset": 2048, 00:16:48.999 "data_size": 63488 00:16:48.999 } 00:16:48.999 ] 00:16:48.999 }' 00:16:48.999 16:32:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:48.999 16:32:25 -- common/autotest_common.sh@10 -- # set +x 00:16:49.565 16:32:26 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:49.565 16:32:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:49.565 16:32:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:49.823 [2024-07-11 16:32:26.620779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.823 [2024-07-11 16:32:26.620984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.823 [2024-07-11 16:32:26.621049] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:49.823 [2024-07-11 16:32:26.621283] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.823 [2024-07-11 16:32:26.621779] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.823 [2024-07-11 16:32:26.621927] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.823 [2024-07-11 16:32:26.622130] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:49.823 [2024-07-11 16:32:26.622253] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.823 pt2 00:16:50.082 16:32:26 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:50.082 16:32:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:50.082 16:32:26 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:50.082 [2024-07-11 16:32:26.808810] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:50.082 [2024-07-11 16:32:26.809000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.082 [2024-07-11 16:32:26.809060] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:50.082 [2024-07-11 16:32:26.809335] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.082 [2024-07-11 16:32:26.809725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.083 [2024-07-11 16:32:26.809859] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:50.083 [2024-07-11 16:32:26.810044] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:50.083 [2024-07-11 16:32:26.810146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:50.083 [2024-07-11 16:32:26.810288] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:16:50.083 [2024-07-11 16:32:26.810398] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:50.083 [2024-07-11 16:32:26.810577] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:50.083 [2024-07-11 16:32:26.810972] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:16:50.083 [2024-07-11 16:32:26.811082] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:16:50.083 [2024-07-11 16:32:26.811291] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.083 pt3 00:16:50.083 16:32:26 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:50.083 16:32:26 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:50.083 16:32:26 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:16:50.083 16:32:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:50.083 16:32:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:50.083 16:32:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:16:50.083 16:32:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:50.083 16:32:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:50.083 16:32:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:50.083 16:32:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:50.083 16:32:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:50.083 16:32:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:50.083 16:32:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.083 
16:32:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.341 16:32:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:50.341 "name": "raid_bdev1", 00:16:50.341 "uuid": "2a505b09-19a0-4677-9efc-2a90214b52ae", 00:16:50.341 "strip_size_kb": 64, 00:16:50.341 "state": "online", 00:16:50.341 "raid_level": "raid0", 00:16:50.341 "superblock": true, 00:16:50.341 "num_base_bdevs": 3, 00:16:50.341 "num_base_bdevs_discovered": 3, 00:16:50.341 "num_base_bdevs_operational": 3, 00:16:50.341 "base_bdevs_list": [ 00:16:50.341 { 00:16:50.341 "name": "pt1", 00:16:50.341 "uuid": "af3083af-8931-541f-89e1-3c4edfa9646f", 00:16:50.341 "is_configured": true, 00:16:50.341 "data_offset": 2048, 00:16:50.341 "data_size": 63488 00:16:50.341 }, 00:16:50.341 { 00:16:50.341 "name": "pt2", 00:16:50.341 "uuid": "6e6a6991-6173-5fa7-83b2-ea6b86ad7399", 00:16:50.341 "is_configured": true, 00:16:50.341 "data_offset": 2048, 00:16:50.341 "data_size": 63488 00:16:50.341 }, 00:16:50.341 { 00:16:50.341 "name": "pt3", 00:16:50.341 "uuid": "f7e34009-f306-5696-9f9f-5ff17723d218", 00:16:50.341 "is_configured": true, 00:16:50.341 "data_offset": 2048, 00:16:50.342 "data_size": 63488 00:16:50.342 } 00:16:50.342 ] 00:16:50.342 }' 00:16:50.342 16:32:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:50.342 16:32:27 -- common/autotest_common.sh@10 -- # set +x 00:16:50.909 16:32:27 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:50.909 16:32:27 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:51.168 [2024-07-11 16:32:27.821167] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.168 16:32:27 -- bdev/bdev_raid.sh@430 -- # '[' 2a505b09-19a0-4677-9efc-2a90214b52ae '!=' 2a505b09-19a0-4677-9efc-2a90214b52ae ']' 00:16:51.168 16:32:27 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:16:51.168 16:32:27 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:51.168 16:32:27 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:51.168 16:32:27 -- bdev/bdev_raid.sh@511 -- # killprocess 118484 00:16:51.168 16:32:27 -- common/autotest_common.sh@926 -- # '[' -z 118484 ']' 00:16:51.168 16:32:27 -- common/autotest_common.sh@930 -- # kill -0 118484 00:16:51.168 16:32:27 -- common/autotest_common.sh@931 -- # uname 00:16:51.168 16:32:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:51.168 16:32:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118484 00:16:51.168 16:32:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:51.168 16:32:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:51.168 16:32:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118484' 00:16:51.168 killing process with pid 118484 00:16:51.168 16:32:27 -- common/autotest_common.sh@945 -- # kill 118484 00:16:51.168 16:32:27 -- common/autotest_common.sh@950 -- # wait 118484 00:16:51.168 [2024-07-11 16:32:27.859191] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.168 [2024-07-11 16:32:27.859251] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.168 [2024-07-11 16:32:27.859420] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.168 [2024-07-11 16:32:27.859556] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:16:51.426 [2024-07-11 16:32:28.052168] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.380 ************************************ 00:16:52.380 END TEST raid_superblock_test 00:16:52.380 ************************************ 00:16:52.380 16:32:28 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:52.380 00:16:52.380 real 0m9.962s 00:16:52.380 user 0m17.591s 00:16:52.380 sys 0m1.067s 00:16:52.380 16:32:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:52.380 16:32:28 -- common/autotest_common.sh@10 -- # set +x 00:16:52.380 16:32:28 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:52.380 16:32:28 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:16:52.380 16:32:28 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:16:52.380 16:32:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:52.380 16:32:28 -- common/autotest_common.sh@10 -- # set +x 00:16:52.380 ************************************ 00:16:52.381 START TEST raid_state_function_test 00:16:52.381 ************************************ 00:16:52.381 16:32:28 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:52.381 16:32:28 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:52.381 Process raid pid: 118793 00:16:52.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
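The raid_superblock_test that closes above drives SPDK entirely over JSON-RPC: each pass creates a malloc base bdev, wraps it in a passthru bdev pinned to a fixed UUID, assembles the three passthru bdevs into a raid0 array, verifies its state, and tears it down. A condensed sketch of that sequence, reconstructed from the trace (the rpc.py path and socket are taken verbatim from the log; the loop is an illustrative reduction, not the actual test script):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
    # 32 MiB malloc bdev with 512-byte blocks, as in the trace
    $RPC bdev_malloc_create 32 512 -b malloc$i
    # passthru wrapper with a deterministic UUID so superblock checks are stable
    $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # raid0 over the passthru bdevs, 64 KiB strip, on-disk superblock (-s)
  $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # expect: online
  $RPC bdev_raid_delete raid_bdev1

The error-path probe in the middle of the test (calling bdev_raid_create over malloc1..3 after their superblocks already exist) is what produced the code -17 "File exists" JSON-RPC response captured above.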
00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@226 -- # raid_pid=118793 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118793' 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118793 /var/tmp/spdk-raid.sock 00:16:52.381 16:32:29 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:52.381 16:32:29 -- common/autotest_common.sh@819 -- # '[' -z 118793 ']' 00:16:52.381 16:32:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:52.381 16:32:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:52.381 16:32:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:52.381 16:32:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:52.381 16:32:29 -- common/autotest_common.sh@10 -- # set +x 00:16:52.381 [2024-07-11 16:32:29.065460] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
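Each of these state-function tests begins the same way: the harness launches a bare bdev_svc application bound to a private RPC socket and blocks in waitforlisten until that socket answers; raid_pid 118793 above is that process. A minimal sketch of the startup handshake (the bdev_svc invocation is verbatim from the log; the poll loop is a simplification of waitforlisten, and rpc_get_methods is used here only as a cheap liveness probe):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # poll the UNIX-domain socket until the app's RPC server is up
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done

The -L bdev_raid flag is what enables the *DEBUG* records from bdev_raid.c that dominate this trace.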
00:16:52.381 [2024-07-11 16:32:29.065800] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.640 [2024-07-11 16:32:29.233558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.640 [2024-07-11 16:32:29.399434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.899 [2024-07-11 16:32:29.564338] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.465 16:32:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:53.465 16:32:30 -- common/autotest_common.sh@852 -- # return 0 00:16:53.465 16:32:30 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:53.465 [2024-07-11 16:32:30.254018] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.465 [2024-07-11 16:32:30.254218] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.465 [2024-07-11 16:32:30.254337] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.465 [2024-07-11 16:32:30.254395] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.465 [2024-07-11 16:32:30.254481] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:53.465 [2024-07-11 16:32:30.254643] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.465 16:32:30 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:53.465 16:32:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:53.465 16:32:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:53.465 16:32:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:53.465 16:32:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:53.465 16:32:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:53.465 16:32:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:53.465 16:32:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:53.465 16:32:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:53.465 16:32:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:53.465 16:32:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.465 16:32:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.724 16:32:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:53.724 "name": "Existed_Raid", 00:16:53.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.724 "strip_size_kb": 64, 00:16:53.724 "state": "configuring", 00:16:53.724 "raid_level": "concat", 00:16:53.724 "superblock": false, 00:16:53.724 "num_base_bdevs": 3, 00:16:53.724 "num_base_bdevs_discovered": 0, 00:16:53.724 "num_base_bdevs_operational": 3, 00:16:53.724 "base_bdevs_list": [ 00:16:53.724 { 00:16:53.724 "name": "BaseBdev1", 00:16:53.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.724 "is_configured": false, 00:16:53.724 "data_offset": 0, 00:16:53.724 "data_size": 0 00:16:53.724 }, 00:16:53.724 { 00:16:53.724 "name": "BaseBdev2", 00:16:53.724 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:53.724 "is_configured": false, 00:16:53.724 "data_offset": 0, 00:16:53.724 "data_size": 0 00:16:53.724 }, 00:16:53.724 { 00:16:53.724 "name": "BaseBdev3", 00:16:53.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.724 "is_configured": false, 00:16:53.724 "data_offset": 0, 00:16:53.724 "data_size": 0 00:16:53.724 } 00:16:53.724 ] 00:16:53.724 }' 00:16:53.724 16:32:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:53.724 16:32:30 -- common/autotest_common.sh@10 -- # set +x 00:16:54.660 16:32:31 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:54.661 [2024-07-11 16:32:31.378115] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.661 [2024-07-11 16:32:31.378251] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:54.661 16:32:31 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:54.920 [2024-07-11 16:32:31.618198] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.920 [2024-07-11 16:32:31.618395] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.920 [2024-07-11 16:32:31.618490] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.920 [2024-07-11 16:32:31.618542] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.920 [2024-07-11 16:32:31.618662] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.920 [2024-07-11 16:32:31.618726] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.920 16:32:31 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:55.179 [2024-07-11 16:32:31.907244] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.179 BaseBdev1 00:16:55.179 16:32:31 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:55.179 16:32:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:16:55.179 16:32:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:55.179 16:32:31 -- common/autotest_common.sh@889 -- # local i 00:16:55.179 16:32:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:55.179 16:32:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:55.179 16:32:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:55.438 16:32:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:55.695 [ 00:16:55.695 { 00:16:55.695 "name": "BaseBdev1", 00:16:55.695 "aliases": [ 00:16:55.695 "428c5554-e412-43c0-9fa5-d80330d01754" 00:16:55.695 ], 00:16:55.695 "product_name": "Malloc disk", 00:16:55.695 "block_size": 512, 00:16:55.695 "num_blocks": 65536, 00:16:55.695 "uuid": "428c5554-e412-43c0-9fa5-d80330d01754", 00:16:55.695 "assigned_rate_limits": { 00:16:55.695 "rw_ios_per_sec": 0, 00:16:55.695 "rw_mbytes_per_sec": 0, 00:16:55.695 "r_mbytes_per_sec": 0, 00:16:55.695 "w_mbytes_per_sec": 
0 00:16:55.695 }, 00:16:55.695 "claimed": true, 00:16:55.695 "claim_type": "exclusive_write", 00:16:55.695 "zoned": false, 00:16:55.695 "supported_io_types": { 00:16:55.695 "read": true, 00:16:55.695 "write": true, 00:16:55.695 "unmap": true, 00:16:55.695 "write_zeroes": true, 00:16:55.695 "flush": true, 00:16:55.695 "reset": true, 00:16:55.695 "compare": false, 00:16:55.695 "compare_and_write": false, 00:16:55.695 "abort": true, 00:16:55.696 "nvme_admin": false, 00:16:55.696 "nvme_io": false 00:16:55.696 }, 00:16:55.696 "memory_domains": [ 00:16:55.696 { 00:16:55.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.696 "dma_device_type": 2 00:16:55.696 } 00:16:55.696 ], 00:16:55.696 "driver_specific": {} 00:16:55.696 } 00:16:55.696 ] 00:16:55.696 16:32:32 -- common/autotest_common.sh@895 -- # return 0 00:16:55.696 16:32:32 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:55.696 16:32:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:55.696 16:32:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:55.696 16:32:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:55.696 16:32:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:55.696 16:32:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:55.696 16:32:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:55.696 16:32:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:55.696 16:32:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:55.696 16:32:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:55.696 16:32:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.696 16:32:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.953 16:32:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.953 "name": "Existed_Raid", 00:16:55.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.953 "strip_size_kb": 64, 00:16:55.953 "state": "configuring", 00:16:55.953 "raid_level": "concat", 00:16:55.953 "superblock": false, 00:16:55.953 "num_base_bdevs": 3, 00:16:55.953 "num_base_bdevs_discovered": 1, 00:16:55.953 "num_base_bdevs_operational": 3, 00:16:55.953 "base_bdevs_list": [ 00:16:55.953 { 00:16:55.953 "name": "BaseBdev1", 00:16:55.953 "uuid": "428c5554-e412-43c0-9fa5-d80330d01754", 00:16:55.953 "is_configured": true, 00:16:55.953 "data_offset": 0, 00:16:55.953 "data_size": 65536 00:16:55.953 }, 00:16:55.953 { 00:16:55.953 "name": "BaseBdev2", 00:16:55.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.953 "is_configured": false, 00:16:55.953 "data_offset": 0, 00:16:55.953 "data_size": 0 00:16:55.953 }, 00:16:55.953 { 00:16:55.953 "name": "BaseBdev3", 00:16:55.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.953 "is_configured": false, 00:16:55.953 "data_offset": 0, 00:16:55.953 "data_size": 0 00:16:55.953 } 00:16:55.953 ] 00:16:55.953 }' 00:16:55.953 16:32:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.953 16:32:32 -- common/autotest_common.sh@10 -- # set +x 00:16:56.520 16:32:33 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:56.778 [2024-07-11 16:32:33.411554] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.778 [2024-07-11 16:32:33.411712] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:16:56.778 16:32:33 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:56.778 16:32:33 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:57.036 [2024-07-11 16:32:33.607634] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.036 [2024-07-11 16:32:33.609254] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:57.036 [2024-07-11 16:32:33.609430] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:57.036 [2024-07-11 16:32:33.609527] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:57.036 [2024-07-11 16:32:33.609645] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.036 "name": "Existed_Raid", 00:16:57.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.036 "strip_size_kb": 64, 00:16:57.036 "state": "configuring", 00:16:57.036 "raid_level": "concat", 00:16:57.036 "superblock": false, 00:16:57.036 "num_base_bdevs": 3, 00:16:57.036 "num_base_bdevs_discovered": 1, 00:16:57.036 "num_base_bdevs_operational": 3, 00:16:57.036 "base_bdevs_list": [ 00:16:57.036 { 00:16:57.036 "name": "BaseBdev1", 00:16:57.036 "uuid": "428c5554-e412-43c0-9fa5-d80330d01754", 00:16:57.036 "is_configured": true, 00:16:57.036 "data_offset": 0, 00:16:57.036 "data_size": 65536 00:16:57.036 }, 00:16:57.036 { 00:16:57.036 "name": "BaseBdev2", 00:16:57.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.036 "is_configured": false, 00:16:57.036 "data_offset": 0, 00:16:57.036 "data_size": 0 00:16:57.036 }, 00:16:57.036 { 00:16:57.036 "name": "BaseBdev3", 00:16:57.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.036 "is_configured": false, 00:16:57.036 "data_offset": 0, 00:16:57.036 "data_size": 0 00:16:57.036 } 00:16:57.036 ] 00:16:57.036 }' 00:16:57.036 16:32:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.036 16:32:33 -- common/autotest_common.sh@10 -- # set +x 00:16:57.970 16:32:34 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:58.229 [2024-07-11 16:32:34.801448] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.229 BaseBdev2 00:16:58.229 16:32:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:58.229 16:32:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:16:58.229 16:32:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:58.229 16:32:34 -- common/autotest_common.sh@889 -- # local i 00:16:58.229 16:32:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:58.229 16:32:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:58.229 16:32:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:58.229 16:32:35 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:58.487 [ 00:16:58.487 { 00:16:58.487 "name": "BaseBdev2", 00:16:58.487 "aliases": [ 00:16:58.487 "cf3a5654-5eca-4365-bec0-e12e102a23db" 00:16:58.487 ], 00:16:58.487 "product_name": "Malloc disk", 00:16:58.487 "block_size": 512, 00:16:58.487 "num_blocks": 65536, 00:16:58.487 "uuid": "cf3a5654-5eca-4365-bec0-e12e102a23db", 00:16:58.487 "assigned_rate_limits": { 00:16:58.487 "rw_ios_per_sec": 0, 00:16:58.487 "rw_mbytes_per_sec": 0, 00:16:58.487 "r_mbytes_per_sec": 0, 00:16:58.487 "w_mbytes_per_sec": 0 00:16:58.487 }, 00:16:58.487 "claimed": true, 00:16:58.487 "claim_type": "exclusive_write", 00:16:58.487 "zoned": false, 00:16:58.487 "supported_io_types": { 00:16:58.487 "read": true, 00:16:58.487 "write": true, 00:16:58.488 "unmap": true, 00:16:58.488 "write_zeroes": true, 00:16:58.488 "flush": true, 00:16:58.488 "reset": true, 00:16:58.488 "compare": false, 00:16:58.488 "compare_and_write": false, 00:16:58.488 "abort": true, 00:16:58.488 "nvme_admin": false, 00:16:58.488 "nvme_io": false 00:16:58.488 }, 00:16:58.488 "memory_domains": [ 00:16:58.488 { 00:16:58.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.488 "dma_device_type": 2 00:16:58.488 } 00:16:58.488 ], 00:16:58.488 "driver_specific": {} 00:16:58.488 } 00:16:58.488 ] 00:16:58.488 16:32:35 -- common/autotest_common.sh@895 -- # return 0 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.488 16:32:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
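Every verify_raid_bdev_state call in this trace reduces to one RPC plus a jq filter: fetch all raid bdevs, select the entry by name, and compare fields such as state, raid_level, strip_size_kb and num_base_bdevs_discovered against the expected values. A minimal equivalent of the check running at this point, where BaseBdev2 has just been added and the array should still be configuring with two of three members discovered (the jq expressions are verbatim from the trace; the final test line stands in for the script's per-field comparisons):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  state=$(jq -r .state <<< "$info")
  discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")
  [ "$state" = configuring ] && [ "$discovered" -eq 2 ]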
00:16:58.748 16:32:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.748 "name": "Existed_Raid", 00:16:58.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.748 "strip_size_kb": 64, 00:16:58.748 "state": "configuring", 00:16:58.748 "raid_level": "concat", 00:16:58.748 "superblock": false, 00:16:58.748 "num_base_bdevs": 3, 00:16:58.748 "num_base_bdevs_discovered": 2, 00:16:58.748 "num_base_bdevs_operational": 3, 00:16:58.748 "base_bdevs_list": [ 00:16:58.748 { 00:16:58.748 "name": "BaseBdev1", 00:16:58.748 "uuid": "428c5554-e412-43c0-9fa5-d80330d01754", 00:16:58.748 "is_configured": true, 00:16:58.748 "data_offset": 0, 00:16:58.748 "data_size": 65536 00:16:58.748 }, 00:16:58.748 { 00:16:58.748 "name": "BaseBdev2", 00:16:58.748 "uuid": "cf3a5654-5eca-4365-bec0-e12e102a23db", 00:16:58.748 "is_configured": true, 00:16:58.748 "data_offset": 0, 00:16:58.748 "data_size": 65536 00:16:58.748 }, 00:16:58.748 { 00:16:58.748 "name": "BaseBdev3", 00:16:58.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.748 "is_configured": false, 00:16:58.748 "data_offset": 0, 00:16:58.748 "data_size": 0 00:16:58.748 } 00:16:58.748 ] 00:16:58.748 }' 00:16:58.748 16:32:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.748 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:16:59.685 16:32:36 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:59.685 [2024-07-11 16:32:36.473384] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:59.685 [2024-07-11 16:32:36.473592] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:59.685 [2024-07-11 16:32:36.473630] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:59.685 [2024-07-11 16:32:36.473833] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:59.685 [2024-07-11 16:32:36.474271] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:59.685 [2024-07-11 16:32:36.474393] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:59.685 [2024-07-11 16:32:36.474733] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.685 BaseBdev3 00:16:59.685 16:32:36 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:59.685 16:32:36 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:16:59.685 16:32:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:59.685 16:32:36 -- common/autotest_common.sh@889 -- # local i 00:16:59.685 16:32:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:59.685 16:32:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:59.685 16:32:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:59.943 16:32:36 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:00.201 [ 00:17:00.201 { 00:17:00.201 "name": "BaseBdev3", 00:17:00.201 "aliases": [ 00:17:00.201 "4b141673-e3a3-47e3-8a81-1f31852403a3" 00:17:00.201 ], 00:17:00.201 "product_name": "Malloc disk", 00:17:00.201 "block_size": 512, 00:17:00.201 "num_blocks": 65536, 00:17:00.201 "uuid": "4b141673-e3a3-47e3-8a81-1f31852403a3", 00:17:00.201 "assigned_rate_limits": { 00:17:00.201 
"rw_ios_per_sec": 0, 00:17:00.201 "rw_mbytes_per_sec": 0, 00:17:00.201 "r_mbytes_per_sec": 0, 00:17:00.201 "w_mbytes_per_sec": 0 00:17:00.201 }, 00:17:00.201 "claimed": true, 00:17:00.201 "claim_type": "exclusive_write", 00:17:00.201 "zoned": false, 00:17:00.201 "supported_io_types": { 00:17:00.201 "read": true, 00:17:00.201 "write": true, 00:17:00.201 "unmap": true, 00:17:00.201 "write_zeroes": true, 00:17:00.201 "flush": true, 00:17:00.201 "reset": true, 00:17:00.201 "compare": false, 00:17:00.201 "compare_and_write": false, 00:17:00.201 "abort": true, 00:17:00.201 "nvme_admin": false, 00:17:00.201 "nvme_io": false 00:17:00.201 }, 00:17:00.201 "memory_domains": [ 00:17:00.201 { 00:17:00.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.201 "dma_device_type": 2 00:17:00.201 } 00:17:00.202 ], 00:17:00.202 "driver_specific": {} 00:17:00.202 } 00:17:00.202 ] 00:17:00.202 16:32:36 -- common/autotest_common.sh@895 -- # return 0 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.202 16:32:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.460 16:32:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.460 "name": "Existed_Raid", 00:17:00.460 "uuid": "74bb32ba-2bb3-45b2-b681-c4e427867f84", 00:17:00.460 "strip_size_kb": 64, 00:17:00.460 "state": "online", 00:17:00.460 "raid_level": "concat", 00:17:00.460 "superblock": false, 00:17:00.460 "num_base_bdevs": 3, 00:17:00.460 "num_base_bdevs_discovered": 3, 00:17:00.460 "num_base_bdevs_operational": 3, 00:17:00.460 "base_bdevs_list": [ 00:17:00.460 { 00:17:00.460 "name": "BaseBdev1", 00:17:00.460 "uuid": "428c5554-e412-43c0-9fa5-d80330d01754", 00:17:00.460 "is_configured": true, 00:17:00.460 "data_offset": 0, 00:17:00.460 "data_size": 65536 00:17:00.460 }, 00:17:00.460 { 00:17:00.460 "name": "BaseBdev2", 00:17:00.460 "uuid": "cf3a5654-5eca-4365-bec0-e12e102a23db", 00:17:00.460 "is_configured": true, 00:17:00.460 "data_offset": 0, 00:17:00.460 "data_size": 65536 00:17:00.460 }, 00:17:00.460 { 00:17:00.460 "name": "BaseBdev3", 00:17:00.460 "uuid": "4b141673-e3a3-47e3-8a81-1f31852403a3", 00:17:00.460 "is_configured": true, 00:17:00.460 "data_offset": 0, 00:17:00.460 "data_size": 65536 00:17:00.460 } 00:17:00.460 ] 00:17:00.460 }' 00:17:00.460 16:32:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.460 16:32:37 -- common/autotest_common.sh@10 -- # set +x 00:17:01.026 16:32:37 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:17:01.284 [2024-07-11 16:32:38.005392] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:01.284 [2024-07-11 16:32:38.005529] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:01.284 [2024-07-11 16:32:38.005704] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.284 16:32:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.542 16:32:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.542 "name": "Existed_Raid", 00:17:01.542 "uuid": "74bb32ba-2bb3-45b2-b681-c4e427867f84", 00:17:01.542 "strip_size_kb": 64, 00:17:01.542 "state": "offline", 00:17:01.542 "raid_level": "concat", 00:17:01.542 "superblock": false, 00:17:01.542 "num_base_bdevs": 3, 00:17:01.542 "num_base_bdevs_discovered": 2, 00:17:01.542 "num_base_bdevs_operational": 2, 00:17:01.542 "base_bdevs_list": [ 00:17:01.542 { 00:17:01.542 "name": null, 00:17:01.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.542 "is_configured": false, 00:17:01.542 "data_offset": 0, 00:17:01.542 "data_size": 65536 00:17:01.542 }, 00:17:01.542 { 00:17:01.542 "name": "BaseBdev2", 00:17:01.542 "uuid": "cf3a5654-5eca-4365-bec0-e12e102a23db", 00:17:01.542 "is_configured": true, 00:17:01.542 "data_offset": 0, 00:17:01.542 "data_size": 65536 00:17:01.542 }, 00:17:01.542 { 00:17:01.542 "name": "BaseBdev3", 00:17:01.542 "uuid": "4b141673-e3a3-47e3-8a81-1f31852403a3", 00:17:01.542 "is_configured": true, 00:17:01.542 "data_offset": 0, 00:17:01.542 "data_size": 65536 00:17:01.542 } 00:17:01.542 ] 00:17:01.542 }' 00:17:01.542 16:32:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.542 16:32:38 -- common/autotest_common.sh@10 -- # set +x 00:17:02.478 16:32:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:02.478 16:32:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:02.478 16:32:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.478 16:32:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:02.478 16:32:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:02.478 16:32:39 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:02.479 16:32:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:02.737 [2024-07-11 16:32:39.347377] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:02.737 16:32:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:02.737 16:32:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:02.737 16:32:39 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.737 16:32:39 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:02.996 16:32:39 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:02.997 16:32:39 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:02.997 16:32:39 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:03.255 [2024-07-11 16:32:39.866760] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:03.255 [2024-07-11 16:32:39.866995] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:03.255 16:32:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:03.255 16:32:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:03.255 16:32:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.255 16:32:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:03.514 16:32:40 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:03.514 16:32:40 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:03.514 16:32:40 -- bdev/bdev_raid.sh@287 -- # killprocess 118793 00:17:03.514 16:32:40 -- common/autotest_common.sh@926 -- # '[' -z 118793 ']' 00:17:03.514 16:32:40 -- common/autotest_common.sh@930 -- # kill -0 118793 00:17:03.514 16:32:40 -- common/autotest_common.sh@931 -- # uname 00:17:03.514 16:32:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:03.514 16:32:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 118793 00:17:03.514 killing process with pid 118793 00:17:03.514 16:32:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:03.514 16:32:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:03.514 16:32:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 118793' 00:17:03.514 16:32:40 -- common/autotest_common.sh@945 -- # kill 118793 00:17:03.514 16:32:40 -- common/autotest_common.sh@950 -- # wait 118793 00:17:03.514 [2024-07-11 16:32:40.140460] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:03.514 [2024-07-11 16:32:40.140832] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.448 ************************************ 00:17:04.448 END TEST raid_state_function_test 00:17:04.448 ************************************ 00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:04.448 00:17:04.448 real 0m12.056s 00:17:04.448 user 0m21.626s 00:17:04.448 sys 0m1.297s 00:17:04.448 16:32:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.448 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:17:04.448 16:32:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 
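The base-bdev deletions and offline checks that close the test above hinge on one property: concat, like raid0, has no redundancy, so removing any member immediately deconfigures the array (the trace shows "raid bdev state changing from online to offline" as soon as BaseBdev1 disappears). The script encodes this in its has_redundancy helper; a sketch of the logic as exercised here (only the non-redundant branch is visible in this trace, so the redundant-level list is an assumption):

  has_redundancy() {
    case $1 in
      raid1) return 0 ;;   # assumed redundant level; not exercised in this trace
      *) return 1 ;;       # raid0/concat: no redundancy
    esac
  }
  if ! has_redundancy concat; then
    expected_state=offline   # losing any base bdev takes the whole array down
  fi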
00:17:04.448 16:32:41 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:17:04.448 16:32:41 -- common/autotest_common.sh@10 -- # set +x
00:17:04.448 ************************************
00:17:04.448 START TEST raid_state_function_test_sb
00:17:04.448 ************************************
00:17:04.448 16:32:41 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@204 -- # local superblock=true
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']'
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']'
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=119191
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119191'
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:17:04.448 Process raid pid: 119191
00:17:04.448 16:32:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119191 /var/tmp/spdk-raid.sock
00:17:04.448 16:32:41 -- common/autotest_common.sh@819 -- # '[' -z 119191 ']'
00:17:04.448 16:32:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:04.448 16:32:41 -- common/autotest_common.sh@824 -- # local max_retries=100
00:17:04.448 16:32:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:04.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
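waitforlisten, traced above, polls the freshly started bdev_svc app until its private RPC socket answers. A minimal sketch of the same launch-and-wait pattern, assuming the in-tree bdev_svc binary and the generic rpc_get_methods RPC (the real helper also enforces max_retries and checks that the pid is still alive):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll until the UNIX domain socket accepts RPCs (sketch of waitforlisten).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done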
00:17:04.448 16:32:41 -- common/autotest_common.sh@828 -- # xtrace_disable
00:17:04.448 16:32:41 -- common/autotest_common.sh@10 -- # set +x
00:17:04.448 [2024-07-11 16:32:41.183505] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:17:04.448 [2024-07-11 16:32:41.183882] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:04.707 [2024-07-11 16:32:41.354722] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:04.977 [2024-07-11 16:32:41.580051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:04.977 [2024-07-11 16:32:41.745712] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:05.552 16:32:42 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:17:05.552 16:32:42 -- common/autotest_common.sh@852 -- # return 0
00:17:05.552 16:32:42 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:17:05.552 [2024-07-11 16:32:42.282520] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:05.552 [2024-07-11 16:32:42.282730] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:05.552 [2024-07-11 16:32:42.282864] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:05.552 [2024-07-11 16:32:42.282987] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:05.552 [2024-07-11 16:32:42.283078] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:05.552 [2024-07-11 16:32:42.283204] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:05.552 16:32:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:17:05.552 16:32:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:05.552 16:32:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:05.552 16:32:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:05.552 16:32:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:05.552 16:32:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:17:05.552 16:32:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:05.552 16:32:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:05.552 16:32:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:05.552 16:32:42 -- bdev/bdev_raid.sh@125 -- # local tmp
00:17:05.552 16:32:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:05.552 16:32:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:05.810 16:32:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:05.810 "name": "Existed_Raid",
00:17:05.810 "uuid": "c8a81c42-6d77-47e7-96e6-2a134267ccbe",
00:17:05.810 "strip_size_kb": 64,
00:17:05.810 "state": "configuring",
00:17:05.810 "raid_level": "concat",
00:17:05.810 "superblock": true,
00:17:05.810 "num_base_bdevs": 3,
00:17:05.810 "num_base_bdevs_discovered": 0,
00:17:05.810 "num_base_bdevs_operational": 3,
00:17:05.810 "base_bdevs_list": [
00:17:05.810 {
00:17:05.810 "name": "BaseBdev1",
00:17:05.810 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.810 "is_configured": false,
00:17:05.810 "data_offset": 0,
00:17:05.810 "data_size": 0
00:17:05.810 },
00:17:05.810 {
00:17:05.810 "name": "BaseBdev2",
00:17:05.810 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.810 "is_configured": false,
00:17:05.810 "data_offset": 0,
00:17:05.810 "data_size": 0
00:17:05.810 },
00:17:05.810 {
00:17:05.810 "name": "BaseBdev3",
00:17:05.810 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:05.810 "is_configured": false,
00:17:05.810 "data_offset": 0,
00:17:05.810 "data_size": 0
00:17:05.810 }
00:17:05.810 ]
00:17:05.810 }'
00:17:05.810 16:32:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:05.810 16:32:42 -- common/autotest_common.sh@10 -- # set +x
00:17:06.644 16:32:43 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:17:06.644 [2024-07-11 16:32:43.358569] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:17:06.644 [2024-07-11 16:32:43.358706] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:17:06.644 16:32:43 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:17:06.902 [2024-07-11 16:32:43.538659] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:17:06.902 [2024-07-11 16:32:43.538832] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:17:06.902 [2024-07-11 16:32:43.538926] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:17:06.902 [2024-07-11 16:32:43.539064] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:17:06.902 [2024-07-11 16:32:43.539153] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:17:06.902 [2024-07-11 16:32:43.539215] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:06.902 16:32:43 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:17:07.160 [2024-07-11 16:32:43.799629] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:17:07.160 BaseBdev1
00:17:07.160 16:32:43 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:17:07.160 16:32:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:17:07.160 16:32:43 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:17:07.160 16:32:43 -- common/autotest_common.sh@889 -- # local i
00:17:07.160 16:32:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:17:07.160 16:32:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:17:07.160 16:32:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
"num_blocks": 65536, 00:17:07.418 "uuid": "c4465790-9162-4217-90d7-80b88089e61d", 00:17:07.418 "assigned_rate_limits": { 00:17:07.418 "rw_ios_per_sec": 0, 00:17:07.418 "rw_mbytes_per_sec": 0, 00:17:07.418 "r_mbytes_per_sec": 0, 00:17:07.418 "w_mbytes_per_sec": 0 00:17:07.418 }, 00:17:07.418 "claimed": true, 00:17:07.418 "claim_type": "exclusive_write", 00:17:07.418 "zoned": false, 00:17:07.418 "supported_io_types": { 00:17:07.418 "read": true, 00:17:07.418 "write": true, 00:17:07.418 "unmap": true, 00:17:07.418 "write_zeroes": true, 00:17:07.418 "flush": true, 00:17:07.418 "reset": true, 00:17:07.418 "compare": false, 00:17:07.418 "compare_and_write": false, 00:17:07.418 "abort": true, 00:17:07.418 "nvme_admin": false, 00:17:07.418 "nvme_io": false 00:17:07.418 }, 00:17:07.418 "memory_domains": [ 00:17:07.418 { 00:17:07.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.418 "dma_device_type": 2 00:17:07.418 } 00:17:07.418 ], 00:17:07.418 "driver_specific": {} 00:17:07.418 } 00:17:07.418 ] 00:17:07.418 16:32:44 -- common/autotest_common.sh@895 -- # return 0 00:17:07.418 16:32:44 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:07.418 16:32:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.418 16:32:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.418 16:32:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:07.418 16:32:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:07.418 16:32:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:07.418 16:32:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.418 16:32:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.418 16:32:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.418 16:32:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.418 16:32:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.418 16:32:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.676 16:32:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.676 "name": "Existed_Raid", 00:17:07.676 "uuid": "622dd8d7-cbe0-468a-8c00-edda3c351faa", 00:17:07.676 "strip_size_kb": 64, 00:17:07.676 "state": "configuring", 00:17:07.676 "raid_level": "concat", 00:17:07.676 "superblock": true, 00:17:07.676 "num_base_bdevs": 3, 00:17:07.676 "num_base_bdevs_discovered": 1, 00:17:07.676 "num_base_bdevs_operational": 3, 00:17:07.676 "base_bdevs_list": [ 00:17:07.676 { 00:17:07.676 "name": "BaseBdev1", 00:17:07.676 "uuid": "c4465790-9162-4217-90d7-80b88089e61d", 00:17:07.676 "is_configured": true, 00:17:07.676 "data_offset": 2048, 00:17:07.676 "data_size": 63488 00:17:07.676 }, 00:17:07.676 { 00:17:07.676 "name": "BaseBdev2", 00:17:07.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.676 "is_configured": false, 00:17:07.676 "data_offset": 0, 00:17:07.676 "data_size": 0 00:17:07.676 }, 00:17:07.676 { 00:17:07.676 "name": "BaseBdev3", 00:17:07.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.676 "is_configured": false, 00:17:07.676 "data_offset": 0, 00:17:07.676 "data_size": 0 00:17:07.676 } 00:17:07.676 ] 00:17:07.676 }' 00:17:07.676 16:32:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.676 16:32:44 -- common/autotest_common.sh@10 -- # set +x 00:17:08.611 16:32:45 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:08.611 [2024-07-11 16:32:45.340000] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:08.611 [2024-07-11 16:32:45.340201] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:08.611 16:32:45 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:08.611 16:32:45 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:08.869 16:32:45 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:09.127 BaseBdev1 00:17:09.127 16:32:45 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:09.127 16:32:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:09.127 16:32:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:09.127 16:32:45 -- common/autotest_common.sh@889 -- # local i 00:17:09.127 16:32:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:09.127 16:32:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:09.127 16:32:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:09.385 16:32:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:09.644 [ 00:17:09.644 { 00:17:09.644 "name": "BaseBdev1", 00:17:09.644 "aliases": [ 00:17:09.644 "07927ea1-b6fb-47a5-af07-5bdbdedea0bd" 00:17:09.644 ], 00:17:09.644 "product_name": "Malloc disk", 00:17:09.644 "block_size": 512, 00:17:09.644 "num_blocks": 65536, 00:17:09.644 "uuid": "07927ea1-b6fb-47a5-af07-5bdbdedea0bd", 00:17:09.644 "assigned_rate_limits": { 00:17:09.644 "rw_ios_per_sec": 0, 00:17:09.644 "rw_mbytes_per_sec": 0, 00:17:09.644 "r_mbytes_per_sec": 0, 00:17:09.644 "w_mbytes_per_sec": 0 00:17:09.644 }, 00:17:09.644 "claimed": false, 00:17:09.644 "zoned": false, 00:17:09.644 "supported_io_types": { 00:17:09.644 "read": true, 00:17:09.644 "write": true, 00:17:09.644 "unmap": true, 00:17:09.644 "write_zeroes": true, 00:17:09.644 "flush": true, 00:17:09.644 "reset": true, 00:17:09.644 "compare": false, 00:17:09.644 "compare_and_write": false, 00:17:09.644 "abort": true, 00:17:09.644 "nvme_admin": false, 00:17:09.644 "nvme_io": false 00:17:09.644 }, 00:17:09.644 "memory_domains": [ 00:17:09.644 { 00:17:09.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.644 "dma_device_type": 2 00:17:09.644 } 00:17:09.644 ], 00:17:09.644 "driver_specific": {} 00:17:09.644 } 00:17:09.644 ] 00:17:09.644 16:32:46 -- common/autotest_common.sh@895 -- # return 0 00:17:09.644 16:32:46 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:09.903 [2024-07-11 16:32:46.542624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.903 [2024-07-11 16:32:46.544275] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:09.903 [2024-07-11 16:32:46.544464] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:09.903 [2024-07-11 16:32:46.544591] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:09.903 [2024-07-11 
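This superblock pass re-creates the array with -s, which reserves room for raid metadata on every base bdev. That is exactly where the 2048/63488 pairs in the JSON dumps come from, and it predicts the array size reported once all three members arrive:

    # Each 65536-block malloc bdev gives up 2048 blocks to the superblock,
    # so data_offset=2048, data_size=63488, and the concat blockcnt is:
    echo $(( 3 * (65536 - 2048) ))   # 190464, matching the trace below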
00:17:09.903 [2024-07-11 16:32:46.544704] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@125 -- # local tmp
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:09.903 16:32:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:10.161 16:32:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:10.161 "name": "Existed_Raid",
00:17:10.161 "uuid": "10a37dca-ffb5-49b7-9900-897a292d01b4",
00:17:10.161 "strip_size_kb": 64,
00:17:10.161 "state": "configuring",
00:17:10.161 "raid_level": "concat",
00:17:10.161 "superblock": true,
00:17:10.161 "num_base_bdevs": 3,
00:17:10.162 "num_base_bdevs_discovered": 1,
00:17:10.162 "num_base_bdevs_operational": 3,
00:17:10.162 "base_bdevs_list": [
00:17:10.162 {
00:17:10.162 "name": "BaseBdev1",
00:17:10.162 "uuid": "07927ea1-b6fb-47a5-af07-5bdbdedea0bd",
00:17:10.162 "is_configured": true,
00:17:10.162 "data_offset": 2048,
00:17:10.162 "data_size": 63488
00:17:10.162 },
00:17:10.162 {
00:17:10.162 "name": "BaseBdev2",
00:17:10.162 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:10.162 "is_configured": false,
00:17:10.162 "data_offset": 0,
00:17:10.162 "data_size": 0
00:17:10.162 },
00:17:10.162 {
00:17:10.162 "name": "BaseBdev3",
00:17:10.162 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:10.162 "is_configured": false,
00:17:10.162 "data_offset": 0,
00:17:10.162 "data_size": 0
00:17:10.162 }
00:17:10.162 ]
00:17:10.162 }'
00:17:10.162 16:32:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:10.162 16:32:46 -- common/autotest_common.sh@10 -- # set +x
00:17:10.729 16:32:47 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:17:10.988 [2024-07-11 16:32:47.741313] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:17:10.988 BaseBdev2
00:17:10.988 16:32:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:17:10.988 16:32:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2
00:17:10.988 16:32:47 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:17:10.988 16:32:47 -- common/autotest_common.sh@889 -- # local i
00:17:10.988 16:32:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:17:10.988 16:32:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:17:10.988 16:32:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:17:11.246 16:32:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:17:11.505 [
00:17:11.505 {
00:17:11.505 "name": "BaseBdev2",
00:17:11.505 "aliases": [
00:17:11.505 "afca4f00-eda4-4195-b50a-e753b5f951a1"
00:17:11.505 ],
00:17:11.505 "product_name": "Malloc disk",
00:17:11.505 "block_size": 512,
00:17:11.505 "num_blocks": 65536,
00:17:11.505 "uuid": "afca4f00-eda4-4195-b50a-e753b5f951a1",
00:17:11.505 "assigned_rate_limits": {
00:17:11.505 "rw_ios_per_sec": 0,
00:17:11.505 "rw_mbytes_per_sec": 0,
00:17:11.505 "r_mbytes_per_sec": 0,
00:17:11.505 "w_mbytes_per_sec": 0
00:17:11.505 },
00:17:11.505 "claimed": true,
00:17:11.505 "claim_type": "exclusive_write",
00:17:11.505 "zoned": false,
00:17:11.505 "supported_io_types": {
00:17:11.505 "read": true,
00:17:11.505 "write": true,
00:17:11.505 "unmap": true,
00:17:11.505 "write_zeroes": true,
00:17:11.505 "flush": true,
00:17:11.505 "reset": true,
00:17:11.505 "compare": false,
00:17:11.505 "compare_and_write": false,
00:17:11.505 "abort": true,
00:17:11.505 "nvme_admin": false,
00:17:11.505 "nvme_io": false
00:17:11.505 },
00:17:11.505 "memory_domains": [
00:17:11.505 {
00:17:11.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:11.505 "dma_device_type": 2
00:17:11.505 }
00:17:11.505 ],
00:17:11.505 "driver_specific": {}
00:17:11.505 }
00:17:11.505 ]
00:17:11.505 16:32:48 -- common/autotest_common.sh@895 -- # return 0
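Each successful bdev_raid_create claim is visible in bdev_get_bdevs as "claimed": true with claim_type "exclusive_write", which lets the test tell that a member has been adopted before the raid bdev itself reports it. A small query sketch over the same dump format:

    # List the bdevs currently claimed by some module (sketch).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs | jq -r '.[] | select(.claimed == true) | .name'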
"data_size": 63488 00:17:11.763 }, 00:17:11.763 { 00:17:11.763 "name": "BaseBdev3", 00:17:11.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.763 "is_configured": false, 00:17:11.763 "data_offset": 0, 00:17:11.763 "data_size": 0 00:17:11.763 } 00:17:11.763 ] 00:17:11.763 }' 00:17:11.763 16:32:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.763 16:32:48 -- common/autotest_common.sh@10 -- # set +x 00:17:12.331 16:32:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:12.589 [2024-07-11 16:32:49.371533] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:12.589 [2024-07-11 16:32:49.371876] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:12.589 BaseBdev3 00:17:12.589 [2024-07-11 16:32:49.372339] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:12.589 [2024-07-11 16:32:49.372559] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:12.589 [2024-07-11 16:32:49.377967] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:12.589 [2024-07-11 16:32:49.378272] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:12.589 [2024-07-11 16:32:49.378882] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.589 16:32:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:12.589 16:32:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:12.589 16:32:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:12.589 16:32:49 -- common/autotest_common.sh@889 -- # local i 00:17:12.589 16:32:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:12.589 16:32:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:12.589 16:32:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:12.849 16:32:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:13.108 [ 00:17:13.108 { 00:17:13.108 "name": "BaseBdev3", 00:17:13.108 "aliases": [ 00:17:13.108 "c1a54ebd-9ee0-497f-a6e4-4d882a4b5ffc" 00:17:13.108 ], 00:17:13.108 "product_name": "Malloc disk", 00:17:13.108 "block_size": 512, 00:17:13.108 "num_blocks": 65536, 00:17:13.108 "uuid": "c1a54ebd-9ee0-497f-a6e4-4d882a4b5ffc", 00:17:13.108 "assigned_rate_limits": { 00:17:13.108 "rw_ios_per_sec": 0, 00:17:13.108 "rw_mbytes_per_sec": 0, 00:17:13.108 "r_mbytes_per_sec": 0, 00:17:13.108 "w_mbytes_per_sec": 0 00:17:13.108 }, 00:17:13.108 "claimed": true, 00:17:13.108 "claim_type": "exclusive_write", 00:17:13.108 "zoned": false, 00:17:13.108 "supported_io_types": { 00:17:13.108 "read": true, 00:17:13.108 "write": true, 00:17:13.108 "unmap": true, 00:17:13.108 "write_zeroes": true, 00:17:13.108 "flush": true, 00:17:13.108 "reset": true, 00:17:13.108 "compare": false, 00:17:13.108 "compare_and_write": false, 00:17:13.108 "abort": true, 00:17:13.108 "nvme_admin": false, 00:17:13.108 "nvme_io": false 00:17:13.108 }, 00:17:13.108 "memory_domains": [ 00:17:13.108 { 00:17:13.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.108 "dma_device_type": 2 00:17:13.108 } 00:17:13.108 ], 00:17:13.108 "driver_specific": {} 00:17:13.108 } 00:17:13.108 ] 00:17:13.108 
16:32:49 -- common/autotest_common.sh@895 -- # return 0 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.108 16:32:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.367 16:32:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:13.367 "name": "Existed_Raid", 00:17:13.367 "uuid": "10a37dca-ffb5-49b7-9900-897a292d01b4", 00:17:13.367 "strip_size_kb": 64, 00:17:13.367 "state": "online", 00:17:13.367 "raid_level": "concat", 00:17:13.367 "superblock": true, 00:17:13.367 "num_base_bdevs": 3, 00:17:13.367 "num_base_bdevs_discovered": 3, 00:17:13.367 "num_base_bdevs_operational": 3, 00:17:13.367 "base_bdevs_list": [ 00:17:13.367 { 00:17:13.367 "name": "BaseBdev1", 00:17:13.367 "uuid": "07927ea1-b6fb-47a5-af07-5bdbdedea0bd", 00:17:13.367 "is_configured": true, 00:17:13.367 "data_offset": 2048, 00:17:13.367 "data_size": 63488 00:17:13.367 }, 00:17:13.367 { 00:17:13.367 "name": "BaseBdev2", 00:17:13.367 "uuid": "afca4f00-eda4-4195-b50a-e753b5f951a1", 00:17:13.367 "is_configured": true, 00:17:13.367 "data_offset": 2048, 00:17:13.367 "data_size": 63488 00:17:13.367 }, 00:17:13.367 { 00:17:13.367 "name": "BaseBdev3", 00:17:13.367 "uuid": "c1a54ebd-9ee0-497f-a6e4-4d882a4b5ffc", 00:17:13.367 "is_configured": true, 00:17:13.367 "data_offset": 2048, 00:17:13.367 "data_size": 63488 00:17:13.367 } 00:17:13.367 ] 00:17:13.367 }' 00:17:13.367 16:32:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:13.367 16:32:49 -- common/autotest_common.sh@10 -- # set +x 00:17:13.935 16:32:50 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:14.194 [2024-07-11 16:32:50.902900] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:14.194 [2024-07-11 16:32:50.903040] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.194 [2024-07-11 16:32:50.903210] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:17:14.194 16:32:50 -- 
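Deleting BaseBdev1 out from under the online array triggers the deconfigure path seen just above: with concat there is nothing to rebuild from, so the state drops straight to offline while two of the three members stay discovered. A one-line assertion sketch over the same RPC output (jq -e turns the boolean into an exit status):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all |
        jq -e '.[0] | .state == "offline" and .num_base_bdevs_discovered == 2'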
00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@125 -- # local tmp
00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:14.194 16:32:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:14.453 16:32:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:14.453 "name": "Existed_Raid",
00:17:14.453 "uuid": "10a37dca-ffb5-49b7-9900-897a292d01b4",
00:17:14.453 "strip_size_kb": 64,
00:17:14.453 "state": "offline",
00:17:14.453 "raid_level": "concat",
00:17:14.453 "superblock": true,
00:17:14.453 "num_base_bdevs": 3,
00:17:14.453 "num_base_bdevs_discovered": 2,
00:17:14.453 "num_base_bdevs_operational": 2,
00:17:14.453 "base_bdevs_list": [
00:17:14.453 {
00:17:14.453 "name": null,
00:17:14.453 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:14.453 "is_configured": false,
00:17:14.453 "data_offset": 2048,
00:17:14.453 "data_size": 63488
00:17:14.453 },
00:17:14.454 {
00:17:14.454 "name": "BaseBdev2",
00:17:14.454 "uuid": "afca4f00-eda4-4195-b50a-e753b5f951a1",
00:17:14.454 "is_configured": true,
00:17:14.454 "data_offset": 2048,
00:17:14.454 "data_size": 63488
00:17:14.454 },
00:17:14.454 {
00:17:14.454 "name": "BaseBdev3",
00:17:14.454 "uuid": "c1a54ebd-9ee0-497f-a6e4-4d882a4b5ffc",
00:17:14.454 "is_configured": true,
00:17:14.454 "data_offset": 2048,
00:17:14.454 "data_size": 63488
00:17:14.454 }
00:17:14.454 ]
00:17:14.454 }'
00:17:14.454 16:32:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:14.454 16:32:51 -- common/autotest_common.sh@10 -- # set +x
00:17:15.389 16:32:51 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:17:15.389 16:32:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:15.389 16:32:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:15.389 16:32:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:15.389 16:32:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:15.389 16:32:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:15.389 16:32:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:17:15.647 [2024-07-11 16:32:52.322592] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:17:15.647 16:32:52 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:15.647 16:32:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:15.647 16:32:52 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:15.647 16:32:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:17:15.905 16:32:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:17:15.905 16:32:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:17:15.905 16:32:52 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:17:16.164 [2024-07-11 16:32:52.797608] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:17:16.164 [2024-07-11 16:32:52.797785] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline
00:17:16.164 16:32:52 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:17:16.164 16:32:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:17:16.164 16:32:52 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:16.164 16:32:52 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:17:16.421 16:32:53 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:17:16.421 16:32:53 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:17:16.421 16:32:53 -- bdev/bdev_raid.sh@287 -- # killprocess 119191
00:17:16.421 16:32:53 -- common/autotest_common.sh@926 -- # '[' -z 119191 ']'
00:17:16.421 16:32:53 -- common/autotest_common.sh@930 -- # kill -0 119191
00:17:16.421 16:32:53 -- common/autotest_common.sh@931 -- # uname
00:17:16.421 16:32:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:16.421 16:32:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119191
00:17:16.421 killing process with pid 119191
00:17:16.421 16:32:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:17:16.421 16:32:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:17:16.421 16:32:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119191'
00:17:16.421 16:32:53 -- common/autotest_common.sh@945 -- # kill 119191
00:17:16.421 16:32:53 -- common/autotest_common.sh@950 -- # wait 119191
00:17:16.421 [2024-07-11 16:32:53.119505] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:17:16.421 [2024-07-11 16:32:53.119647] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:17.355 ************************************
00:17:17.355 END TEST raid_state_function_test_sb
00:17:17.355 ************************************
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@289 -- # return 0
00:17:17.355
00:17:17.355 real 0m12.917s
00:17:17.355 user 0m23.158s
00:17:17.355 sys 0m1.388s
00:17:17.355 16:32:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:17:17.355 16:32:54 -- common/autotest_common.sh@10 -- # set +x
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3
00:17:17.355 16:32:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:17:17.355 16:32:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:17:17.355 16:32:54 -- common/autotest_common.sh@10 -- # set +x
00:17:17.355 ************************************
00:17:17.355 START TEST raid_superblock_test
00:17:17.355 ************************************
00:17:17.355 16:32:54 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
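raid_superblock_test, starting here, builds a three-layer stack: a malloc bdev, a passthru bdev pinned to a fixed UUID on top of it, and the raid assembled from the passthrus, so the on-disk superblocks land on predictable devices. A sketch of one leg of that stack, using the same commands this run issues below:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001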
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@344 -- # local strip_size
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']'
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@350 -- # strip_size=64
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@357 -- # raid_pid=119600
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@358 -- # waitforlisten 119600 /var/tmp/spdk-raid.sock
00:17:17.355 16:32:54 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:17:17.355 16:32:54 -- common/autotest_common.sh@819 -- # '[' -z 119600 ']'
00:17:17.355 16:32:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:17:17.355 16:32:54 -- common/autotest_common.sh@824 -- # local max_retries=100
00:17:17.355 16:32:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:17:17.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:17:17.355 16:32:54 -- common/autotest_common.sh@828 -- # xtrace_disable
00:17:17.355 16:32:54 -- common/autotest_common.sh@10 -- # set +x
00:17:17.355 [2024-07-11 16:32:54.151528] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:17:17.355 [2024-07-11 16:32:54.151962] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119600 ]
00:17:17.614 [2024-07-11 16:32:54.323195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:17.872 [2024-07-11 16:32:54.533289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:17:18.131 [2024-07-11 16:32:54.697679] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:18.404 16:32:55 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:17:18.404 16:32:55 -- common/autotest_common.sh@852 -- # return 0
00:17:18.404 16:32:55 -- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:17:18.404 16:32:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:18.404 16:32:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:17:18.404 16:32:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:17:18.404 16:32:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:17:18.404 16:32:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:18.404 16:32:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:17:18.404 16:32:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:18.404 16:32:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:17:18.673 malloc1
00:17:18.673 16:32:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:18.932 [2024-07-11 16:32:55.534494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:18.932 [2024-07-11 16:32:55.534712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:18.932 [2024-07-11 16:32:55.534847] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80
00:17:18.932 [2024-07-11 16:32:55.534975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:18.932 [2024-07-11 16:32:55.537078] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:18.932 [2024-07-11 16:32:55.537237] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:18.932 pt1
00:17:18.932 16:32:55 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:17:18.932 16:32:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:18.932 16:32:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:17:18.932 16:32:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:17:18.932 16:32:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:17:18.932 16:32:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:18.932 16:32:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:17:18.932 16:32:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:18.932 16:32:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:17:19.191 malloc2
00:17:19.191 16:32:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:19.450 [2024-07-11 16:32:56.103293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:19.450 [2024-07-11 16:32:56.103496] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:19.450 [2024-07-11 16:32:56.103569] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:17:19.450 [2024-07-11 16:32:56.103851] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:19.450 [2024-07-11 16:32:56.105920] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:19.450 [2024-07-11 16:32:56.106095] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:19.450 pt2
00:17:19.450 16:32:56 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:17:19.450 16:32:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:19.450 16:32:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:17:19.450 16:32:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:17:19.450 16:32:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:17:19.450 16:32:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:19.450 16:32:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:17:19.450 16:32:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:19.450 16:32:56 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:17:19.708 malloc3
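The "Match on malloc..." notices show vbdev_passthru constructing each pt bdev as soon as its configured base appears. The suite later verifies teardown with the same jq filter it logs at bdev_raid.sh@395 further down; a sketch of that check:

    # True while any passthru bdev exists; the test expects false after cleanup.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'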
00:17:19.708 16:32:56 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:17:19.967 [2024-07-11 16:32:56.547675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:17:19.967 [2024-07-11 16:32:56.547877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:19.967 [2024-07-11 16:32:56.547946] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:17:19.967 [2024-07-11 16:32:56.548193] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:19.967 [2024-07-11 16:32:56.550253] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:19.967 [2024-07-11 16:32:56.550433] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:17:19.967 pt3
00:17:19.967 16:32:56 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:17:19.967 16:32:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:17:19.967 16:32:56 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s
00:17:19.967 [2024-07-11 16:32:56.723718] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:19.967 [2024-07-11 16:32:56.725357] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:19.967 [2024-07-11 16:32:56.725559] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:17:19.968 [2024-07-11 16:32:56.725787] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80
00:17:19.968 [2024-07-11 16:32:56.725914] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:17:19.968 [2024-07-11 16:32:56.726073] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860
00:17:19.968 [2024-07-11 16:32:56.726412] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80
00:17:19.968 [2024-07-11 16:32:56.726523] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80
00:17:19.968 [2024-07-11 16:32:56.726736] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:19.968 16:32:56 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:17:19.968 16:32:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:17:19.968 16:32:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:17:19.968 16:32:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:19.968 16:32:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:19.968 16:32:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:17:19.968 16:32:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:19.968 16:32:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:19.968 16:32:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:19.968 16:32:56 -- bdev/bdev_raid.sh@125 -- # local tmp
00:17:19.968 16:32:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:19.968 16:32:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:20.226 16:32:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:20.226 "name": "raid_bdev1",
00:17:20.226 "uuid": "759754de-2d4d-48c1-afe5-99b1bc1b639e",
00:17:20.226 "strip_size_kb": 64,
00:17:20.226 "state": "online",
00:17:20.226 "raid_level": "concat",
00:17:20.226 "superblock": true,
00:17:20.226 "num_base_bdevs": 3,
00:17:20.226 "num_base_bdevs_discovered": 3,
00:17:20.226 "num_base_bdevs_operational": 3,
00:17:20.226 "base_bdevs_list": [
00:17:20.226 {
00:17:20.226 "name": "pt1",
00:17:20.226 "uuid": "6cd1ba4f-101d-5e39-ba18-75136f8fa281",
00:17:20.226 "is_configured": true,
00:17:20.227 "data_offset": 2048,
00:17:20.227 "data_size": 63488
00:17:20.227 },
00:17:20.227 {
00:17:20.227 "name": "pt2",
00:17:20.227 "uuid": "c9e364c2-8c5e-51f1-87c9-a523b3407301",
00:17:20.227 "is_configured": true,
00:17:20.227 "data_offset": 2048,
00:17:20.227 "data_size": 63488
00:17:20.227 },
00:17:20.227 {
00:17:20.227 "name": "pt3",
00:17:20.227 "uuid": "f021b225-c11d-5265-856c-51117dcb7dbb",
00:17:20.227 "is_configured": true,
00:17:20.227 "data_offset": 2048,
00:17:20.227 "data_size": 63488
00:17:20.227 }
00:17:20.227 ]
00:17:20.227 }'
00:17:20.227 16:32:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:20.227 16:32:56 -- common/autotest_common.sh@10 -- # set +x
00:17:21.163 16:32:57 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:17:21.163 16:32:57 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:17:21.163 [2024-07-11 16:32:57.772024] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:21.163 16:32:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=759754de-2d4d-48c1-afe5-99b1bc1b639e
00:17:21.163 16:32:57 -- bdev/bdev_raid.sh@380 -- # '[' -z 759754de-2d4d-48c1-afe5-99b1bc1b639e ']'
00:17:21.163 16:32:57 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:17:21.163 [2024-07-11 16:32:57.939869] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:21.163 [2024-07-11 16:32:57.939994] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:21.163 [2024-07-11 16:32:57.940142] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:21.163 [2024-07-11 16:32:57.940288] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:21.163 [2024-07-11 16:32:57.940380] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline
00:17:21.163 16:32:57 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:17:21.163 16:32:57 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:21.422 16:32:58 -- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:17:21.422 16:32:58 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:17:21.422 16:32:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:17:21.422 16:32:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:17:21.680 16:32:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:17:21.680 16:32:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:17:21.939 16:32:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:17:21.939 16:32:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:17:21.939 16:32:58 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:17:21.939 16:32:58 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:17:22.198 16:32:58 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:17:22.198 16:32:58 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:17:22.198 16:32:58 -- common/autotest_common.sh@640 -- # local es=0
00:17:22.198 16:32:58 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:17:22.198 16:32:58 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:22.198 16:32:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:17:22.198 16:32:58 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:22.198 16:32:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:17:22.198 16:32:58 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:22.198 16:32:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:17:22.198 16:32:58 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:17:22.198 16:32:58 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:17:22.198 16:32:58 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1
00:17:22.457 [2024-07-11 16:32:59.100076] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:17:22.457 [2024-07-11 16:32:59.101719] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:17:22.457 [2024-07-11 16:32:59.101773] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:17:22.457 [2024-07-11 16:32:59.101821] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:17:22.457 [2024-07-11 16:32:59.101898] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:17:22.457 [2024-07-11 16:32:59.101932] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:17:22.457 [2024-07-11 16:32:59.102024] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:22.457 [2024-07-11 16:32:59.102036] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring
00:17:22.457 request:
00:17:22.457 {
00:17:22.457 "name": "raid_bdev1",
00:17:22.457 "raid_level": "concat",
00:17:22.457 "base_bdevs": [
00:17:22.457 "malloc1",
00:17:22.457 "malloc2",
00:17:22.457 "malloc3"
00:17:22.457 ],
00:17:22.457 "superblock": false,
00:17:22.457 "strip_size_kb": 64,
00:17:22.457 "method": "bdev_raid_create",
00:17:22.457 "req_id": 1
00:17:22.457 }
00:17:22.457 Got JSON-RPC error response
00:17:22.457 response:
00:17:22.457 {
00:17:22.457 "code": -17,
00:17:22.457 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:17:22.457 }
00:17:22.457 16:32:59 -- common/autotest_common.sh@643 -- # es=1
00:17:22.457 16:32:59 -- common/autotest_common.sh@651 -- # (( es > 128 ))
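The File exists failure above is the point of this test: malloc1-3 still carry the raid superblock written through the pt bdevs, so re-creating raid_bdev1 directly on them must be refused with -17. The NOT wrapper asserts exactly that; a sketch, assuming NOT is essentially exit-status inversion as in autotest_common.sh:

    NOT() { ! "$@"; }   # simplified; the real helper also screens signal exits
    NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1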
00:17:22.457 16:32:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:17:22.457 16:32:59 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:17:22.457 16:32:59 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:22.457 16:32:59 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:17:22.716 16:32:59 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:17:22.716 16:32:59 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:17:22.716 16:32:59 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:22.716 [2024-07-11 16:32:59.524136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:22.716 [2024-07-11 16:32:59.524242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:22.716 [2024-07-11 16:32:59.524279] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:17:22.716 [2024-07-11 16:32:59.524300] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:22.975 [2024-07-11 16:32:59.526569] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:22.975 [2024-07-11 16:32:59.526632] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:22.975 [2024-07-11 16:32:59.526757] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:17:22.975 [2024-07-11 16:32:59.526839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:22.975 pt1
00:17:22.975 16:32:59 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:17:22.975 16:32:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:17:22.975 16:32:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:22.975 16:32:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:22.975 16:32:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:22.975 16:32:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:17:22.975 16:32:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:22.975 16:32:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:22.975 16:32:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:22.975 16:32:59 -- bdev/bdev_raid.sh@125 -- # local tmp
00:17:22.975 16:32:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:22.975 16:32:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:22.976 16:32:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:22.976 "name": "raid_bdev1",
00:17:22.976 "uuid": "759754de-2d4d-48c1-afe5-99b1bc1b639e",
00:17:22.976 "strip_size_kb": 64,
00:17:22.976 "state": "configuring",
00:17:22.976 "raid_level": "concat",
00:17:22.976 "superblock": true,
00:17:22.976 "num_base_bdevs": 3,
00:17:22.976 "num_base_bdevs_discovered": 1,
00:17:22.976 "num_base_bdevs_operational": 3,
00:17:22.976 "base_bdevs_list": [
00:17:22.976 {
00:17:22.976 "name": "pt1",
00:17:22.976 "uuid": "6cd1ba4f-101d-5e39-ba18-75136f8fa281",
00:17:22.976 "is_configured": true,
00:17:22.976 "data_offset": 2048,
00:17:22.976 "data_size": 63488
00:17:22.976 },
00:17:22.976 {
00:17:22.976 "name": null,
00:17:22.976 "uuid": "c9e364c2-8c5e-51f1-87c9-a523b3407301",
00:17:22.976 "is_configured": false,
00:17:22.976 "data_offset": 2048,
00:17:22.976 "data_size": 63488
00:17:22.976 },
00:17:22.976 {
00:17:22.976 "name": null,
00:17:22.976 "uuid": "f021b225-c11d-5265-856c-51117dcb7dbb",
00:17:22.976 "is_configured": false,
00:17:22.976 "data_offset": 2048,
00:17:22.976 "data_size": 63488
00:17:22.976 }
00:17:22.976 ]
00:17:22.976 }'
00:17:22.976 16:32:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:22.976 16:32:59 -- common/autotest_common.sh@10 -- # set +x
00:17:23.543 16:33:00 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']'
00:17:23.543 16:33:00 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:23.802 [2024-07-11 16:33:00.456273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:23.802 [2024-07-11 16:33:00.456357] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:23.802 [2024-07-11 16:33:00.456395] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:17:23.802 [2024-07-11 16:33:00.456414] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:23.802 [2024-07-11 16:33:00.456850] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:23.802 [2024-07-11 16:33:00.456889] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:23.802 [2024-07-11 16:33:00.457060] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:17:23.802 [2024-07-11 16:33:00.457089] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:23.802 pt2
00:17:23.802 16:33:00 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:17:24.060 [2024-07-11 16:33:00.724336] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:17:24.060 16:33:00 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:17:24.060 16:33:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:17:24.060 16:33:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:17:24.060 16:33:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:17:24.060 16:33:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:17:24.060 16:33:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:17:24.060 16:33:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:17:24.060 16:33:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:17:24.060 16:33:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:17:24.060 16:33:00 -- bdev/bdev_raid.sh@125 -- # local tmp
00:17:24.060 16:33:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:17:24.060 16:33:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:24.319 16:33:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:17:24.319 "name": "raid_bdev1",
00:17:24.319 "uuid": "759754de-2d4d-48c1-afe5-99b1bc1b639e",
00:17:24.319 "strip_size_kb": 64,
00:17:24.319 "state": "configuring",
00:17:24.319 "raid_level": "concat",
00:17:24.319 "superblock": true,
00:17:24.319 "num_base_bdevs": 3,
00:17:24.319 "num_base_bdevs_discovered": 1,
00:17:24.319 "num_base_bdevs_operational": 3,
00:17:24.319 "base_bdevs_list": [
00:17:24.319 {
00:17:24.319 "name": "pt1",
00:17:24.319 "uuid": "6cd1ba4f-101d-5e39-ba18-75136f8fa281",
00:17:24.319 "is_configured": true,
00:17:24.319 "data_offset": 2048,
00:17:24.319 "data_size": 63488
00:17:24.319 },
00:17:24.319 {
00:17:24.319 "name": null,
00:17:24.319 "uuid": "c9e364c2-8c5e-51f1-87c9-a523b3407301",
00:17:24.319 "is_configured": false,
00:17:24.319 "data_offset": 2048,
00:17:24.319 "data_size": 63488
00:17:24.319 },
00:17:24.319 {
00:17:24.319 "name": null,
00:17:24.319 "uuid": "f021b225-c11d-5265-856c-51117dcb7dbb",
00:17:24.319 "is_configured": false,
00:17:24.319 "data_offset": 2048,
00:17:24.319 "data_size": 63488
00:17:24.319 }
00:17:24.319 ]
00:17:24.319 }'
00:17:24.319 16:33:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:17:24.319 16:33:00 -- common/autotest_common.sh@10 -- # set +x
00:17:24.886 16:33:01 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:17:24.886 16:33:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:24.886 16:33:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:25.146 [2024-07-11 16:33:01.756502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:25.146 [2024-07-11 16:33:01.756564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:25.146 [2024-07-11 16:33:01.756594] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:17:25.146 [2024-07-11 16:33:01.756624] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:25.146 [2024-07-11 16:33:01.757042] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:25.146 [2024-07-11 16:33:01.757077] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:25.146 [2024-07-11 16:33:01.757173] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:17:25.146 [2024-07-11 16:33:01.757199] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:25.146 pt2
00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:17:25.146 [2024-07-11 16:33:01.936528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:17:25.146 [2024-07-11 16:33:01.936578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:25.146 [2024-07-11 16:33:01.936605] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580
00:17:25.146 [2024-07-11 16:33:01.936625] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:25.146 [2024-07-11 16:33:01.936950] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:25.146 [2024-07-11 16:33:01.936998] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:17:25.146 [2024-07-11 16:33:01.937090] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:17:25.146 [2024-07-11 16:33:01.937114] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:17:25.146 [2024-07-11 16:33:01.937217] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register
00:17:24.319 "uuid": "6cd1ba4f-101d-5e39-ba18-75136f8fa281", 00:17:24.319 "is_configured": true, 00:17:24.319 "data_offset": 2048, 00:17:24.319 "data_size": 63488 00:17:24.319 }, 00:17:24.319 { 00:17:24.319 "name": null, 00:17:24.319 "uuid": "c9e364c2-8c5e-51f1-87c9-a523b3407301", 00:17:24.319 "is_configured": false, 00:17:24.319 "data_offset": 2048, 00:17:24.319 "data_size": 63488 00:17:24.319 }, 00:17:24.319 { 00:17:24.319 "name": null, 00:17:24.319 "uuid": "f021b225-c11d-5265-856c-51117dcb7dbb", 00:17:24.319 "is_configured": false, 00:17:24.319 "data_offset": 2048, 00:17:24.319 "data_size": 63488 00:17:24.319 } 00:17:24.319 ] 00:17:24.319 }' 00:17:24.319 16:33:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.319 16:33:00 -- common/autotest_common.sh@10 -- # set +x 00:17:24.886 16:33:01 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:24.886 16:33:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:24.886 16:33:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:25.146 [2024-07-11 16:33:01.756502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:25.146 [2024-07-11 16:33:01.756564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.146 [2024-07-11 16:33:01.756594] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:25.146 [2024-07-11 16:33:01.756624] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.146 [2024-07-11 16:33:01.757042] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.146 [2024-07-11 16:33:01.757077] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:25.146 [2024-07-11 16:33:01.757173] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:25.146 [2024-07-11 16:33:01.757199] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:25.146 pt2 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:25.146 [2024-07-11 16:33:01.936528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:25.146 [2024-07-11 16:33:01.936578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.146 [2024-07-11 16:33:01.936605] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:25.146 [2024-07-11 16:33:01.936625] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.146 [2024-07-11 16:33:01.936950] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.146 [2024-07-11 16:33:01.936998] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:25.146 [2024-07-11 16:33:01.937090] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:25.146 [2024-07-11 16:33:01.937114] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:25.146 [2024-07-11 16:33:01.937217] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000009c80 00:17:25.146 [2024-07-11 16:33:01.937229] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:25.146 [2024-07-11 16:33:01.937323] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:25.146 [2024-07-11 16:33:01.937677] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:25.146 [2024-07-11 16:33:01.937701] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:25.146 [2024-07-11 16:33:01.937822] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.146 pt3 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.146 16:33:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.405 16:33:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.405 "name": "raid_bdev1", 00:17:25.405 "uuid": "759754de-2d4d-48c1-afe5-99b1bc1b639e", 00:17:25.405 "strip_size_kb": 64, 00:17:25.405 "state": "online", 00:17:25.405 "raid_level": "concat", 00:17:25.405 "superblock": true, 00:17:25.405 "num_base_bdevs": 3, 00:17:25.405 "num_base_bdevs_discovered": 3, 00:17:25.405 "num_base_bdevs_operational": 3, 00:17:25.405 "base_bdevs_list": [ 00:17:25.405 { 00:17:25.405 "name": "pt1", 00:17:25.405 "uuid": "6cd1ba4f-101d-5e39-ba18-75136f8fa281", 00:17:25.405 "is_configured": true, 00:17:25.405 "data_offset": 2048, 00:17:25.405 "data_size": 63488 00:17:25.405 }, 00:17:25.405 { 00:17:25.405 "name": "pt2", 00:17:25.405 "uuid": "c9e364c2-8c5e-51f1-87c9-a523b3407301", 00:17:25.405 "is_configured": true, 00:17:25.405 "data_offset": 2048, 00:17:25.405 "data_size": 63488 00:17:25.405 }, 00:17:25.405 { 00:17:25.405 "name": "pt3", 00:17:25.405 "uuid": "f021b225-c11d-5265-856c-51117dcb7dbb", 00:17:25.405 "is_configured": true, 00:17:25.405 "data_offset": 2048, 00:17:25.405 "data_size": 63488 00:17:25.405 } 00:17:25.405 ] 00:17:25.405 }' 00:17:25.405 16:33:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.405 16:33:02 -- common/autotest_common.sh@10 -- # set +x 00:17:26.339 16:33:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:26.339 16:33:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:26.339 [2024-07-11 16:33:03.036951] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.339 16:33:03 -- bdev/bdev_raid.sh@430 -- # '[' 
759754de-2d4d-48c1-afe5-99b1bc1b639e '!=' 759754de-2d4d-48c1-afe5-99b1bc1b639e ']' 00:17:26.339 16:33:03 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:17:26.339 16:33:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:26.339 16:33:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:26.339 16:33:03 -- bdev/bdev_raid.sh@511 -- # killprocess 119600 00:17:26.339 16:33:03 -- common/autotest_common.sh@926 -- # '[' -z 119600 ']' 00:17:26.339 16:33:03 -- common/autotest_common.sh@930 -- # kill -0 119600 00:17:26.339 16:33:03 -- common/autotest_common.sh@931 -- # uname 00:17:26.339 16:33:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:26.339 16:33:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119600 00:17:26.339 killing process with pid 119600 00:17:26.339 16:33:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:26.339 16:33:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:26.339 16:33:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119600' 00:17:26.339 16:33:03 -- common/autotest_common.sh@945 -- # kill 119600 00:17:26.339 16:33:03 -- common/autotest_common.sh@950 -- # wait 119600 00:17:26.339 [2024-07-11 16:33:03.071912] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:26.339 [2024-07-11 16:33:03.071973] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.339 [2024-07-11 16:33:03.072054] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.339 [2024-07-11 16:33:03.072071] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:17:26.596 [2024-07-11 16:33:03.259897] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.532 ************************************ 00:17:27.532 END TEST raid_superblock_test 00:17:27.532 ************************************ 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:27.532 00:17:27.532 real 0m10.080s 00:17:27.532 user 0m17.858s 00:17:27.532 sys 0m1.023s 00:17:27.532 16:33:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.532 16:33:04 -- common/autotest_common.sh@10 -- # set +x 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:17:27.532 16:33:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:27.532 16:33:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:27.532 16:33:04 -- common/autotest_common.sh@10 -- # set +x 00:17:27.532 ************************************ 00:17:27.532 START TEST raid_state_function_test 00:17:27.532 ************************************ 00:17:27.532 16:33:04 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:27.532 16:33:04 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=119922 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:27.532 Process raid pid: 119922 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119922' 00:17:27.532 16:33:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119922 /var/tmp/spdk-raid.sock 00:17:27.532 16:33:04 -- common/autotest_common.sh@819 -- # '[' -z 119922 ']' 00:17:27.532 16:33:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:27.532 16:33:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:27.532 16:33:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:27.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:27.532 16:33:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:27.532 16:33:04 -- common/autotest_common.sh@10 -- # set +x 00:17:27.532 [2024-07-11 16:33:04.275626] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:27.532 [2024-07-11 16:33:04.275810] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.790 [2024-07-11 16:33:04.439740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.047 [2024-07-11 16:33:04.599332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.047 [2024-07-11 16:33:04.768738] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.613 16:33:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:28.613 16:33:05 -- common/autotest_common.sh@852 -- # return 0 00:17:28.613 16:33:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:28.874 [2024-07-11 16:33:05.434463] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:28.874 [2024-07-11 16:33:05.434548] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:28.874 [2024-07-11 16:33:05.434561] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.874 [2024-07-11 16:33:05.434579] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.874 [2024-07-11 16:33:05.434586] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:28.874 [2024-07-11 16:33:05.434620] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.874 "name": "Existed_Raid", 00:17:28.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.874 "strip_size_kb": 0, 00:17:28.874 "state": "configuring", 00:17:28.874 "raid_level": "raid1", 00:17:28.874 "superblock": false, 00:17:28.874 "num_base_bdevs": 3, 00:17:28.874 "num_base_bdevs_discovered": 0, 00:17:28.874 "num_base_bdevs_operational": 3, 00:17:28.874 "base_bdevs_list": [ 00:17:28.874 { 00:17:28.874 "name": "BaseBdev1", 00:17:28.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.874 "is_configured": false, 00:17:28.874 "data_offset": 0, 00:17:28.874 "data_size": 0 00:17:28.874 }, 00:17:28.874 { 00:17:28.874 "name": "BaseBdev2", 00:17:28.874 "uuid": 
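The bdev_raid_create just issued succeeds even though none of BaseBdev1-3 exist yet, as the NOTICE lines below confirm; the raid bdev is registered and simply waits in the configuring state until all base bdevs appear. A minimal sketch of the assertion that verify_raid_bdev_state then performs, assuming the same rpc.py path and socket (the rpc and tmp variable names are illustrative):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
# With no base bdevs discovered yet, the raid bdev must still be configuring.
[ "$(jq -r '.state' <<< "$tmp")" = configuring ]
[ "$(jq -r '.num_base_bdevs_discovered' <<< "$tmp")" = 0 ]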
"00000000-0000-0000-0000-000000000000", 00:17:28.874 "is_configured": false, 00:17:28.874 "data_offset": 0, 00:17:28.874 "data_size": 0 00:17:28.874 }, 00:17:28.874 { 00:17:28.874 "name": "BaseBdev3", 00:17:28.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.874 "is_configured": false, 00:17:28.874 "data_offset": 0, 00:17:28.874 "data_size": 0 00:17:28.874 } 00:17:28.874 ] 00:17:28.874 }' 00:17:28.874 16:33:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.874 16:33:05 -- common/autotest_common.sh@10 -- # set +x 00:17:29.806 16:33:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:29.806 [2024-07-11 16:33:06.447235] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.806 [2024-07-11 16:33:06.447274] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:29.806 16:33:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:30.062 [2024-07-11 16:33:06.623263] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.062 [2024-07-11 16:33:06.623322] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.062 [2024-07-11 16:33:06.623349] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.062 [2024-07-11 16:33:06.623366] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.062 [2024-07-11 16:33:06.623373] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:30.062 [2024-07-11 16:33:06.623414] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:30.062 16:33:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:30.320 [2024-07-11 16:33:06.896775] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.320 BaseBdev1 00:17:30.320 16:33:06 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:30.320 16:33:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:30.320 16:33:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:30.320 16:33:06 -- common/autotest_common.sh@889 -- # local i 00:17:30.320 16:33:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:30.320 16:33:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:30.320 16:33:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.579 16:33:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.579 [ 00:17:30.579 { 00:17:30.579 "name": "BaseBdev1", 00:17:30.579 "aliases": [ 00:17:30.579 "e150c949-382e-4d25-afe1-543d2df8a668" 00:17:30.579 ], 00:17:30.579 "product_name": "Malloc disk", 00:17:30.579 "block_size": 512, 00:17:30.579 "num_blocks": 65536, 00:17:30.579 "uuid": "e150c949-382e-4d25-afe1-543d2df8a668", 00:17:30.579 "assigned_rate_limits": { 00:17:30.579 "rw_ios_per_sec": 0, 00:17:30.579 "rw_mbytes_per_sec": 0, 00:17:30.579 "r_mbytes_per_sec": 0, 00:17:30.579 "w_mbytes_per_sec": 0 
00:17:30.579 }, 00:17:30.579 "claimed": true, 00:17:30.579 "claim_type": "exclusive_write", 00:17:30.579 "zoned": false, 00:17:30.579 "supported_io_types": { 00:17:30.579 "read": true, 00:17:30.579 "write": true, 00:17:30.579 "unmap": true, 00:17:30.579 "write_zeroes": true, 00:17:30.579 "flush": true, 00:17:30.579 "reset": true, 00:17:30.579 "compare": false, 00:17:30.579 "compare_and_write": false, 00:17:30.579 "abort": true, 00:17:30.579 "nvme_admin": false, 00:17:30.579 "nvme_io": false 00:17:30.579 }, 00:17:30.579 "memory_domains": [ 00:17:30.579 { 00:17:30.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.579 "dma_device_type": 2 00:17:30.579 } 00:17:30.579 ], 00:17:30.579 "driver_specific": {} 00:17:30.579 } 00:17:30.579 ] 00:17:30.579 16:33:07 -- common/autotest_common.sh@895 -- # return 0 00:17:30.579 16:33:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:30.579 16:33:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:30.579 16:33:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:30.579 16:33:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:30.579 16:33:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:30.579 16:33:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:30.579 16:33:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:30.579 16:33:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:30.579 16:33:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:30.579 16:33:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:30.579 16:33:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.579 16:33:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.837 16:33:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:30.837 "name": "Existed_Raid", 00:17:30.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.837 "strip_size_kb": 0, 00:17:30.837 "state": "configuring", 00:17:30.837 "raid_level": "raid1", 00:17:30.837 "superblock": false, 00:17:30.837 "num_base_bdevs": 3, 00:17:30.837 "num_base_bdevs_discovered": 1, 00:17:30.837 "num_base_bdevs_operational": 3, 00:17:30.837 "base_bdevs_list": [ 00:17:30.837 { 00:17:30.837 "name": "BaseBdev1", 00:17:30.837 "uuid": "e150c949-382e-4d25-afe1-543d2df8a668", 00:17:30.837 "is_configured": true, 00:17:30.837 "data_offset": 0, 00:17:30.837 "data_size": 65536 00:17:30.837 }, 00:17:30.837 { 00:17:30.837 "name": "BaseBdev2", 00:17:30.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.837 "is_configured": false, 00:17:30.837 "data_offset": 0, 00:17:30.837 "data_size": 0 00:17:30.837 }, 00:17:30.837 { 00:17:30.837 "name": "BaseBdev3", 00:17:30.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.837 "is_configured": false, 00:17:30.837 "data_offset": 0, 00:17:30.837 "data_size": 0 00:17:30.837 } 00:17:30.837 ] 00:17:30.837 }' 00:17:30.837 16:33:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:30.837 16:33:07 -- common/autotest_common.sh@10 -- # set +x 00:17:31.789 16:33:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:31.789 [2024-07-11 16:33:08.461104] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:31.789 [2024-07-11 16:33:08.461150] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 
name Existed_Raid, state configuring 00:17:31.789 16:33:08 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:31.789 16:33:08 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:32.046 [2024-07-11 16:33:08.713182] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:32.046 [2024-07-11 16:33:08.714793] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:32.046 [2024-07-11 16:33:08.714847] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:32.046 [2024-07-11 16:33:08.714874] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:32.046 [2024-07-11 16:33:08.714895] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.046 16:33:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.304 16:33:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.304 "name": "Existed_Raid", 00:17:32.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.304 "strip_size_kb": 0, 00:17:32.304 "state": "configuring", 00:17:32.304 "raid_level": "raid1", 00:17:32.304 "superblock": false, 00:17:32.304 "num_base_bdevs": 3, 00:17:32.304 "num_base_bdevs_discovered": 1, 00:17:32.304 "num_base_bdevs_operational": 3, 00:17:32.304 "base_bdevs_list": [ 00:17:32.304 { 00:17:32.304 "name": "BaseBdev1", 00:17:32.304 "uuid": "e150c949-382e-4d25-afe1-543d2df8a668", 00:17:32.304 "is_configured": true, 00:17:32.304 "data_offset": 0, 00:17:32.304 "data_size": 65536 00:17:32.304 }, 00:17:32.304 { 00:17:32.304 "name": "BaseBdev2", 00:17:32.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.304 "is_configured": false, 00:17:32.304 "data_offset": 0, 00:17:32.304 "data_size": 0 00:17:32.304 }, 00:17:32.304 { 00:17:32.304 "name": "BaseBdev3", 00:17:32.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.304 "is_configured": false, 00:17:32.304 "data_offset": 0, 00:17:32.304 "data_size": 0 00:17:32.304 } 00:17:32.304 ] 00:17:32.304 }' 00:17:32.304 16:33:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.304 16:33:08 -- common/autotest_common.sh@10 -- # set +x 00:17:32.870 16:33:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:33.129 [2024-07-11 16:33:09.757855] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:33.129 BaseBdev2 00:17:33.129 16:33:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:33.129 16:33:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:33.129 16:33:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:33.129 16:33:09 -- common/autotest_common.sh@889 -- # local i 00:17:33.129 16:33:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:33.129 16:33:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:33.129 16:33:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.387 16:33:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:33.644 [ 00:17:33.644 { 00:17:33.644 "name": "BaseBdev2", 00:17:33.644 "aliases": [ 00:17:33.644 "b6b7f0ab-b2de-4c01-9396-3176097e4e19" 00:17:33.644 ], 00:17:33.644 "product_name": "Malloc disk", 00:17:33.644 "block_size": 512, 00:17:33.644 "num_blocks": 65536, 00:17:33.644 "uuid": "b6b7f0ab-b2de-4c01-9396-3176097e4e19", 00:17:33.644 "assigned_rate_limits": { 00:17:33.644 "rw_ios_per_sec": 0, 00:17:33.644 "rw_mbytes_per_sec": 0, 00:17:33.644 "r_mbytes_per_sec": 0, 00:17:33.644 "w_mbytes_per_sec": 0 00:17:33.644 }, 00:17:33.644 "claimed": true, 00:17:33.644 "claim_type": "exclusive_write", 00:17:33.644 "zoned": false, 00:17:33.644 "supported_io_types": { 00:17:33.644 "read": true, 00:17:33.644 "write": true, 00:17:33.644 "unmap": true, 00:17:33.644 "write_zeroes": true, 00:17:33.644 "flush": true, 00:17:33.644 "reset": true, 00:17:33.644 "compare": false, 00:17:33.644 "compare_and_write": false, 00:17:33.644 "abort": true, 00:17:33.644 "nvme_admin": false, 00:17:33.644 "nvme_io": false 00:17:33.644 }, 00:17:33.644 "memory_domains": [ 00:17:33.644 { 00:17:33.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.645 "dma_device_type": 2 00:17:33.645 } 00:17:33.645 ], 00:17:33.645 "driver_specific": {} 00:17:33.645 } 00:17:33.645 ] 00:17:33.645 16:33:10 -- common/autotest_common.sh@895 -- # return 0 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.645 16:33:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.902 16:33:10 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:17:33.902 "name": "Existed_Raid", 00:17:33.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.902 "strip_size_kb": 0, 00:17:33.902 "state": "configuring", 00:17:33.902 "raid_level": "raid1", 00:17:33.902 "superblock": false, 00:17:33.902 "num_base_bdevs": 3, 00:17:33.902 "num_base_bdevs_discovered": 2, 00:17:33.902 "num_base_bdevs_operational": 3, 00:17:33.902 "base_bdevs_list": [ 00:17:33.902 { 00:17:33.902 "name": "BaseBdev1", 00:17:33.902 "uuid": "e150c949-382e-4d25-afe1-543d2df8a668", 00:17:33.902 "is_configured": true, 00:17:33.902 "data_offset": 0, 00:17:33.902 "data_size": 65536 00:17:33.902 }, 00:17:33.902 { 00:17:33.902 "name": "BaseBdev2", 00:17:33.902 "uuid": "b6b7f0ab-b2de-4c01-9396-3176097e4e19", 00:17:33.902 "is_configured": true, 00:17:33.902 "data_offset": 0, 00:17:33.902 "data_size": 65536 00:17:33.902 }, 00:17:33.902 { 00:17:33.902 "name": "BaseBdev3", 00:17:33.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.902 "is_configured": false, 00:17:33.902 "data_offset": 0, 00:17:33.902 "data_size": 0 00:17:33.902 } 00:17:33.902 ] 00:17:33.902 }' 00:17:33.902 16:33:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:33.902 16:33:10 -- common/autotest_common.sh@10 -- # set +x 00:17:34.467 16:33:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:34.724 [2024-07-11 16:33:11.288461] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:34.724 [2024-07-11 16:33:11.288512] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:34.724 [2024-07-11 16:33:11.288520] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:34.724 [2024-07-11 16:33:11.288640] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:17:34.724 [2024-07-11 16:33:11.289048] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:34.724 [2024-07-11 16:33:11.289074] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:34.724 [2024-07-11 16:33:11.289301] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.724 BaseBdev3 00:17:34.724 16:33:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:34.724 16:33:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:34.724 16:33:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:34.725 16:33:11 -- common/autotest_common.sh@889 -- # local i 00:17:34.725 16:33:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:34.725 16:33:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:34.725 16:33:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:34.983 16:33:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:34.983 [ 00:17:34.983 { 00:17:34.983 "name": "BaseBdev3", 00:17:34.983 "aliases": [ 00:17:34.983 "2055cd2e-34b8-465a-a905-18e344404ae8" 00:17:34.983 ], 00:17:34.983 "product_name": "Malloc disk", 00:17:34.983 "block_size": 512, 00:17:34.983 "num_blocks": 65536, 00:17:34.983 "uuid": "2055cd2e-34b8-465a-a905-18e344404ae8", 00:17:34.983 "assigned_rate_limits": { 00:17:34.983 "rw_ios_per_sec": 0, 00:17:34.983 "rw_mbytes_per_sec": 0, 
00:17:34.983 "r_mbytes_per_sec": 0, 00:17:34.983 "w_mbytes_per_sec": 0 00:17:34.983 }, 00:17:34.983 "claimed": true, 00:17:34.983 "claim_type": "exclusive_write", 00:17:34.983 "zoned": false, 00:17:34.983 "supported_io_types": { 00:17:34.983 "read": true, 00:17:34.983 "write": true, 00:17:34.983 "unmap": true, 00:17:34.983 "write_zeroes": true, 00:17:34.983 "flush": true, 00:17:34.983 "reset": true, 00:17:34.983 "compare": false, 00:17:34.983 "compare_and_write": false, 00:17:34.983 "abort": true, 00:17:34.983 "nvme_admin": false, 00:17:34.983 "nvme_io": false 00:17:34.983 }, 00:17:34.983 "memory_domains": [ 00:17:34.983 { 00:17:34.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.983 "dma_device_type": 2 00:17:34.983 } 00:17:34.983 ], 00:17:34.983 "driver_specific": {} 00:17:34.983 } 00:17:34.983 ] 00:17:34.983 16:33:11 -- common/autotest_common.sh@895 -- # return 0 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.983 16:33:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.242 16:33:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.242 "name": "Existed_Raid", 00:17:35.242 "uuid": "6db47d2b-5db8-420c-9434-b1d4c966f8cf", 00:17:35.242 "strip_size_kb": 0, 00:17:35.242 "state": "online", 00:17:35.242 "raid_level": "raid1", 00:17:35.242 "superblock": false, 00:17:35.242 "num_base_bdevs": 3, 00:17:35.242 "num_base_bdevs_discovered": 3, 00:17:35.242 "num_base_bdevs_operational": 3, 00:17:35.242 "base_bdevs_list": [ 00:17:35.242 { 00:17:35.242 "name": "BaseBdev1", 00:17:35.242 "uuid": "e150c949-382e-4d25-afe1-543d2df8a668", 00:17:35.242 "is_configured": true, 00:17:35.242 "data_offset": 0, 00:17:35.242 "data_size": 65536 00:17:35.242 }, 00:17:35.242 { 00:17:35.242 "name": "BaseBdev2", 00:17:35.242 "uuid": "b6b7f0ab-b2de-4c01-9396-3176097e4e19", 00:17:35.242 "is_configured": true, 00:17:35.242 "data_offset": 0, 00:17:35.242 "data_size": 65536 00:17:35.242 }, 00:17:35.242 { 00:17:35.242 "name": "BaseBdev3", 00:17:35.242 "uuid": "2055cd2e-34b8-465a-a905-18e344404ae8", 00:17:35.242 "is_configured": true, 00:17:35.242 "data_offset": 0, 00:17:35.242 "data_size": 65536 00:17:35.242 } 00:17:35.242 ] 00:17:35.242 }' 00:17:35.242 16:33:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.242 16:33:11 -- common/autotest_common.sh@10 -- # set +x 00:17:35.817 16:33:12 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:36.075 [2024-07-11 
16:33:12.824617] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.334 16:33:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.334 16:33:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.334 "name": "Existed_Raid", 00:17:36.335 "uuid": "6db47d2b-5db8-420c-9434-b1d4c966f8cf", 00:17:36.335 "strip_size_kb": 0, 00:17:36.335 "state": "online", 00:17:36.335 "raid_level": "raid1", 00:17:36.335 "superblock": false, 00:17:36.335 "num_base_bdevs": 3, 00:17:36.335 "num_base_bdevs_discovered": 2, 00:17:36.335 "num_base_bdevs_operational": 2, 00:17:36.335 "base_bdevs_list": [ 00:17:36.335 { 00:17:36.335 "name": null, 00:17:36.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.335 "is_configured": false, 00:17:36.335 "data_offset": 0, 00:17:36.335 "data_size": 65536 00:17:36.335 }, 00:17:36.335 { 00:17:36.335 "name": "BaseBdev2", 00:17:36.335 "uuid": "b6b7f0ab-b2de-4c01-9396-3176097e4e19", 00:17:36.335 "is_configured": true, 00:17:36.335 "data_offset": 0, 00:17:36.335 "data_size": 65536 00:17:36.335 }, 00:17:36.335 { 00:17:36.335 "name": "BaseBdev3", 00:17:36.335 "uuid": "2055cd2e-34b8-465a-a905-18e344404ae8", 00:17:36.335 "is_configured": true, 00:17:36.335 "data_offset": 0, 00:17:36.335 "data_size": 65536 00:17:36.335 } 00:17:36.335 ] 00:17:36.335 }' 00:17:36.335 16:33:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.335 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:17:36.899 16:33:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:36.899 16:33:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:36.899 16:33:13 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.899 16:33:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:37.155 16:33:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:37.155 16:33:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:37.155 16:33:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:37.413 [2024-07-11 16:33:14.121282] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
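With BaseBdev1 already gone, deleting BaseBdev2 still leaves the raid1 array registered, and only once the last leg is removed does the deconfigure path below take it from online to offline and clean it up. A rough sketch of this teardown loop, simplified from the bdev_raid.sh@273-282 trace (bdev names mirror the log):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for bdev in BaseBdev2 BaseBdev3; do
    $rpc bdev_malloc_delete "$bdev"
    # The raid bdev should stay listed until its last base bdev is removed.
    raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
done
[ -z "$raid_bdev" ]   # gone once every leg has been deleted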
00:17:37.413 16:33:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:37.413 16:33:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:37.413 16:33:14 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.413 16:33:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:37.670 16:33:14 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:37.670 16:33:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:37.670 16:33:14 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:37.927 [2024-07-11 16:33:14.661368] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:37.927 [2024-07-11 16:33:14.661408] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.927 [2024-07-11 16:33:14.661474] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.927 [2024-07-11 16:33:14.724216] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.927 [2024-07-11 16:33:14.724252] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:37.927 16:33:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:37.927 16:33:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:38.184 16:33:14 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.184 16:33:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:38.184 16:33:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:38.184 16:33:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:38.184 16:33:14 -- bdev/bdev_raid.sh@287 -- # killprocess 119922 00:17:38.184 16:33:14 -- common/autotest_common.sh@926 -- # '[' -z 119922 ']' 00:17:38.184 16:33:14 -- common/autotest_common.sh@930 -- # kill -0 119922 00:17:38.184 16:33:14 -- common/autotest_common.sh@931 -- # uname 00:17:38.184 16:33:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.184 16:33:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 119922 00:17:38.184 killing process with pid 119922 00:17:38.184 16:33:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:38.184 16:33:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:38.184 16:33:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 119922' 00:17:38.184 16:33:14 -- common/autotest_common.sh@945 -- # kill 119922 00:17:38.184 16:33:14 -- common/autotest_common.sh@950 -- # wait 119922 00:17:38.184 [2024-07-11 16:33:14.984203] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:38.184 [2024-07-11 16:33:14.984293] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:39.114 ************************************ 00:17:39.114 END TEST raid_state_function_test 00:17:39.114 ************************************ 00:17:39.114 16:33:15 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:39.114 00:17:39.114 real 0m11.678s 00:17:39.114 user 0m20.948s 00:17:39.114 sys 0m1.172s 00:17:39.114 16:33:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.114 16:33:15 -- common/autotest_common.sh@10 -- # set +x 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
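Each scenario is dispatched through run_test, which brackets the test function in the START/END banners and timing summary seen above. Roughly, as a simplified sketch (the real helper in common/autotest_common.sh also toggles xtrace and records per-test timing):

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"          # run the test function with its remaining arguments
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}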
00:17:39.372 16:33:15 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:17:39.372 16:33:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:39.372 16:33:15 -- common/autotest_common.sh@10 -- # set +x 00:17:39.372 ************************************ 00:17:39.372 START TEST raid_state_function_test_sb 00:17:39.372 ************************************ 00:17:39.372 16:33:15 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:39.372 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@226 -- # raid_pid=120317 00:17:39.373 Process raid pid: 120317 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120317' 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120317 /var/tmp/spdk-raid.sock 00:17:39.373 16:33:15 -- common/autotest_common.sh@819 -- # '[' -z 120317 ']' 00:17:39.373 16:33:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:39.373 16:33:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:39.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:39.373 16:33:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:17:39.373 16:33:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:39.373 16:33:15 -- common/autotest_common.sh@10 -- # set +x 00:17:39.373 16:33:15 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:39.373 [2024-07-11 16:33:16.001828] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:39.373 [2024-07-11 16:33:16.002129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.373 [2024-07-11 16:33:16.158428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.631 [2024-07-11 16:33:16.390292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.889 [2024-07-11 16:33:16.560056] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.455 16:33:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:40.455 16:33:16 -- common/autotest_common.sh@852 -- # return 0 00:17:40.455 16:33:16 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:40.455 [2024-07-11 16:33:17.212053] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:40.455 [2024-07-11 16:33:17.212158] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:40.455 [2024-07-11 16:33:17.212189] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.455 [2024-07-11 16:33:17.212205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:40.455 [2024-07-11 16:33:17.212212] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:40.455 [2024-07-11 16:33:17.212245] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:40.455 16:33:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:40.455 16:33:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:40.455 16:33:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:40.455 16:33:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:40.455 16:33:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:40.455 16:33:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:40.455 16:33:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.455 16:33:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.455 16:33:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.455 16:33:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.455 16:33:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.455 16:33:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.713 16:33:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.713 "name": "Existed_Raid", 00:17:40.713 "uuid": "6864cd1c-f2e0-4799-8002-98d1499a3ad3", 00:17:40.713 "strip_size_kb": 0, 00:17:40.713 "state": "configuring", 00:17:40.713 "raid_level": "raid1", 00:17:40.713 "superblock": true, 00:17:40.713 "num_base_bdevs": 3, 00:17:40.713 
"num_base_bdevs_discovered": 0, 00:17:40.713 "num_base_bdevs_operational": 3, 00:17:40.713 "base_bdevs_list": [ 00:17:40.713 { 00:17:40.713 "name": "BaseBdev1", 00:17:40.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.713 "is_configured": false, 00:17:40.713 "data_offset": 0, 00:17:40.713 "data_size": 0 00:17:40.713 }, 00:17:40.713 { 00:17:40.713 "name": "BaseBdev2", 00:17:40.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.713 "is_configured": false, 00:17:40.713 "data_offset": 0, 00:17:40.713 "data_size": 0 00:17:40.713 }, 00:17:40.713 { 00:17:40.713 "name": "BaseBdev3", 00:17:40.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.713 "is_configured": false, 00:17:40.713 "data_offset": 0, 00:17:40.713 "data_size": 0 00:17:40.713 } 00:17:40.713 ] 00:17:40.713 }' 00:17:40.713 16:33:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.713 16:33:17 -- common/autotest_common.sh@10 -- # set +x 00:17:41.278 16:33:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:41.536 [2024-07-11 16:33:18.204101] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:41.536 [2024-07-11 16:33:18.204137] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:41.536 16:33:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:41.794 [2024-07-11 16:33:18.456177] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:41.794 [2024-07-11 16:33:18.456231] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:41.794 [2024-07-11 16:33:18.456259] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:41.794 [2024-07-11 16:33:18.456274] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:41.794 [2024-07-11 16:33:18.456281] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:41.794 [2024-07-11 16:33:18.456307] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:41.794 16:33:18 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:42.053 [2024-07-11 16:33:18.673529] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.053 BaseBdev1 00:17:42.053 16:33:18 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:42.053 16:33:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:42.053 16:33:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:42.053 16:33:18 -- common/autotest_common.sh@889 -- # local i 00:17:42.053 16:33:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:42.053 16:33:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:42.053 16:33:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.311 16:33:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:42.311 [ 00:17:42.311 { 00:17:42.311 "name": "BaseBdev1", 00:17:42.311 "aliases": [ 00:17:42.311 
"4becbe77-22a2-4528-b50c-3924a24eb90d" 00:17:42.311 ], 00:17:42.311 "product_name": "Malloc disk", 00:17:42.311 "block_size": 512, 00:17:42.311 "num_blocks": 65536, 00:17:42.311 "uuid": "4becbe77-22a2-4528-b50c-3924a24eb90d", 00:17:42.311 "assigned_rate_limits": { 00:17:42.311 "rw_ios_per_sec": 0, 00:17:42.311 "rw_mbytes_per_sec": 0, 00:17:42.311 "r_mbytes_per_sec": 0, 00:17:42.311 "w_mbytes_per_sec": 0 00:17:42.311 }, 00:17:42.311 "claimed": true, 00:17:42.311 "claim_type": "exclusive_write", 00:17:42.311 "zoned": false, 00:17:42.311 "supported_io_types": { 00:17:42.311 "read": true, 00:17:42.311 "write": true, 00:17:42.311 "unmap": true, 00:17:42.311 "write_zeroes": true, 00:17:42.311 "flush": true, 00:17:42.311 "reset": true, 00:17:42.311 "compare": false, 00:17:42.311 "compare_and_write": false, 00:17:42.312 "abort": true, 00:17:42.312 "nvme_admin": false, 00:17:42.312 "nvme_io": false 00:17:42.312 }, 00:17:42.312 "memory_domains": [ 00:17:42.312 { 00:17:42.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.312 "dma_device_type": 2 00:17:42.312 } 00:17:42.312 ], 00:17:42.312 "driver_specific": {} 00:17:42.312 } 00:17:42.312 ] 00:17:42.312 16:33:19 -- common/autotest_common.sh@895 -- # return 0 00:17:42.312 16:33:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:42.312 16:33:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:42.312 16:33:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:42.312 16:33:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:42.312 16:33:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:42.312 16:33:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:42.312 16:33:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.312 16:33:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.312 16:33:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.312 16:33:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.312 16:33:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.312 16:33:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.570 16:33:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.570 "name": "Existed_Raid", 00:17:42.570 "uuid": "6a1299d4-1253-412a-84fa-92f5e7e27fa3", 00:17:42.570 "strip_size_kb": 0, 00:17:42.570 "state": "configuring", 00:17:42.570 "raid_level": "raid1", 00:17:42.570 "superblock": true, 00:17:42.570 "num_base_bdevs": 3, 00:17:42.570 "num_base_bdevs_discovered": 1, 00:17:42.570 "num_base_bdevs_operational": 3, 00:17:42.570 "base_bdevs_list": [ 00:17:42.570 { 00:17:42.570 "name": "BaseBdev1", 00:17:42.570 "uuid": "4becbe77-22a2-4528-b50c-3924a24eb90d", 00:17:42.570 "is_configured": true, 00:17:42.570 "data_offset": 2048, 00:17:42.570 "data_size": 63488 00:17:42.570 }, 00:17:42.570 { 00:17:42.570 "name": "BaseBdev2", 00:17:42.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.570 "is_configured": false, 00:17:42.570 "data_offset": 0, 00:17:42.570 "data_size": 0 00:17:42.570 }, 00:17:42.570 { 00:17:42.570 "name": "BaseBdev3", 00:17:42.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.570 "is_configured": false, 00:17:42.570 "data_offset": 0, 00:17:42.570 "data_size": 0 00:17:42.570 } 00:17:42.570 ] 00:17:42.570 }' 00:17:42.570 16:33:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.570 16:33:19 -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.136 16:33:19 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:43.394 [2024-07-11 16:33:20.178300] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:43.394 [2024-07-11 16:33:20.178473] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:43.394 16:33:20 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:43.394 16:33:20 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:43.652 16:33:20 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:43.911 BaseBdev1 00:17:43.911 16:33:20 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:43.911 16:33:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:17:43.911 16:33:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:43.911 16:33:20 -- common/autotest_common.sh@889 -- # local i 00:17:43.911 16:33:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:43.912 16:33:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:43.912 16:33:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:44.170 16:33:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:44.428 [ 00:17:44.428 { 00:17:44.428 "name": "BaseBdev1", 00:17:44.428 "aliases": [ 00:17:44.428 "1d436ae3-babc-4e74-9f13-a1830eddfeef" 00:17:44.428 ], 00:17:44.428 "product_name": "Malloc disk", 00:17:44.428 "block_size": 512, 00:17:44.428 "num_blocks": 65536, 00:17:44.428 "uuid": "1d436ae3-babc-4e74-9f13-a1830eddfeef", 00:17:44.428 "assigned_rate_limits": { 00:17:44.428 "rw_ios_per_sec": 0, 00:17:44.428 "rw_mbytes_per_sec": 0, 00:17:44.428 "r_mbytes_per_sec": 0, 00:17:44.428 "w_mbytes_per_sec": 0 00:17:44.428 }, 00:17:44.428 "claimed": false, 00:17:44.428 "zoned": false, 00:17:44.428 "supported_io_types": { 00:17:44.428 "read": true, 00:17:44.428 "write": true, 00:17:44.428 "unmap": true, 00:17:44.428 "write_zeroes": true, 00:17:44.428 "flush": true, 00:17:44.428 "reset": true, 00:17:44.428 "compare": false, 00:17:44.428 "compare_and_write": false, 00:17:44.428 "abort": true, 00:17:44.428 "nvme_admin": false, 00:17:44.428 "nvme_io": false 00:17:44.428 }, 00:17:44.428 "memory_domains": [ 00:17:44.428 { 00:17:44.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.428 "dma_device_type": 2 00:17:44.428 } 00:17:44.428 ], 00:17:44.428 "driver_specific": {} 00:17:44.428 } 00:17:44.428 ] 00:17:44.428 16:33:21 -- common/autotest_common.sh@895 -- # return 0 00:17:44.428 16:33:21 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:44.687 [2024-07-11 16:33:21.321423] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.687 [2024-07-11 16:33:21.323038] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.687 [2024-07-11 16:33:21.323207] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.687 [2024-07-11 
16:33:21.323299] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:44.687 [2024-07-11 16:33:21.323436] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.687 16:33:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.977 16:33:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.977 "name": "Existed_Raid", 00:17:44.977 "uuid": "6811b0c2-fe97-4f4d-8e96-606cf04344f0", 00:17:44.977 "strip_size_kb": 0, 00:17:44.977 "state": "configuring", 00:17:44.977 "raid_level": "raid1", 00:17:44.977 "superblock": true, 00:17:44.977 "num_base_bdevs": 3, 00:17:44.977 "num_base_bdevs_discovered": 1, 00:17:44.977 "num_base_bdevs_operational": 3, 00:17:44.977 "base_bdevs_list": [ 00:17:44.977 { 00:17:44.977 "name": "BaseBdev1", 00:17:44.977 "uuid": "1d436ae3-babc-4e74-9f13-a1830eddfeef", 00:17:44.977 "is_configured": true, 00:17:44.977 "data_offset": 2048, 00:17:44.977 "data_size": 63488 00:17:44.977 }, 00:17:44.977 { 00:17:44.977 "name": "BaseBdev2", 00:17:44.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.977 "is_configured": false, 00:17:44.977 "data_offset": 0, 00:17:44.977 "data_size": 0 00:17:44.977 }, 00:17:44.977 { 00:17:44.977 "name": "BaseBdev3", 00:17:44.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.978 "is_configured": false, 00:17:44.978 "data_offset": 0, 00:17:44.978 "data_size": 0 00:17:44.978 } 00:17:44.978 ] 00:17:44.978 }' 00:17:44.978 16:33:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.978 16:33:21 -- common/autotest_common.sh@10 -- # set +x 00:17:45.544 16:33:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:45.802 [2024-07-11 16:33:22.396922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.802 BaseBdev2 00:17:45.802 16:33:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:45.802 16:33:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:17:45.802 16:33:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:45.802 16:33:22 -- common/autotest_common.sh@889 -- # local i 00:17:45.802 16:33:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:45.802 16:33:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:45.802 16:33:22 -- common/autotest_common.sh@892 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:46.061 16:33:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:46.061 [ 00:17:46.061 { 00:17:46.061 "name": "BaseBdev2", 00:17:46.061 "aliases": [ 00:17:46.061 "9466f1da-5c7a-4381-bd66-76d305b28725" 00:17:46.061 ], 00:17:46.061 "product_name": "Malloc disk", 00:17:46.061 "block_size": 512, 00:17:46.061 "num_blocks": 65536, 00:17:46.061 "uuid": "9466f1da-5c7a-4381-bd66-76d305b28725", 00:17:46.061 "assigned_rate_limits": { 00:17:46.061 "rw_ios_per_sec": 0, 00:17:46.061 "rw_mbytes_per_sec": 0, 00:17:46.061 "r_mbytes_per_sec": 0, 00:17:46.061 "w_mbytes_per_sec": 0 00:17:46.061 }, 00:17:46.061 "claimed": true, 00:17:46.061 "claim_type": "exclusive_write", 00:17:46.061 "zoned": false, 00:17:46.061 "supported_io_types": { 00:17:46.061 "read": true, 00:17:46.061 "write": true, 00:17:46.061 "unmap": true, 00:17:46.061 "write_zeroes": true, 00:17:46.061 "flush": true, 00:17:46.061 "reset": true, 00:17:46.061 "compare": false, 00:17:46.061 "compare_and_write": false, 00:17:46.061 "abort": true, 00:17:46.061 "nvme_admin": false, 00:17:46.061 "nvme_io": false 00:17:46.061 }, 00:17:46.061 "memory_domains": [ 00:17:46.061 { 00:17:46.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.061 "dma_device_type": 2 00:17:46.061 } 00:17:46.061 ], 00:17:46.061 "driver_specific": {} 00:17:46.061 } 00:17:46.061 ] 00:17:46.061 16:33:22 -- common/autotest_common.sh@895 -- # return 0 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.061 16:33:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.319 16:33:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.319 "name": "Existed_Raid", 00:17:46.319 "uuid": "6811b0c2-fe97-4f4d-8e96-606cf04344f0", 00:17:46.319 "strip_size_kb": 0, 00:17:46.319 "state": "configuring", 00:17:46.319 "raid_level": "raid1", 00:17:46.319 "superblock": true, 00:17:46.319 "num_base_bdevs": 3, 00:17:46.319 "num_base_bdevs_discovered": 2, 00:17:46.319 "num_base_bdevs_operational": 3, 00:17:46.319 "base_bdevs_list": [ 00:17:46.319 { 00:17:46.319 "name": "BaseBdev1", 00:17:46.319 "uuid": "1d436ae3-babc-4e74-9f13-a1830eddfeef", 00:17:46.319 "is_configured": true, 00:17:46.319 "data_offset": 2048, 00:17:46.319 "data_size": 63488 00:17:46.319 }, 00:17:46.319 { 00:17:46.319 "name": "BaseBdev2", 00:17:46.320 "uuid": 
"9466f1da-5c7a-4381-bd66-76d305b28725", 00:17:46.320 "is_configured": true, 00:17:46.320 "data_offset": 2048, 00:17:46.320 "data_size": 63488 00:17:46.320 }, 00:17:46.320 { 00:17:46.320 "name": "BaseBdev3", 00:17:46.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.320 "is_configured": false, 00:17:46.320 "data_offset": 0, 00:17:46.320 "data_size": 0 00:17:46.320 } 00:17:46.320 ] 00:17:46.320 }' 00:17:46.320 16:33:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.320 16:33:23 -- common/autotest_common.sh@10 -- # set +x 00:17:46.885 16:33:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:47.143 [2024-07-11 16:33:23.918722] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.143 [2024-07-11 16:33:23.918911] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:47.143 [2024-07-11 16:33:23.918924] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:47.143 [2024-07-11 16:33:23.919064] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:47.143 [2024-07-11 16:33:23.919396] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:47.143 [2024-07-11 16:33:23.919410] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:47.143 [2024-07-11 16:33:23.919555] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.143 BaseBdev3 00:17:47.143 16:33:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:47.143 16:33:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:17:47.143 16:33:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:47.143 16:33:23 -- common/autotest_common.sh@889 -- # local i 00:17:47.143 16:33:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:47.143 16:33:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:47.143 16:33:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:47.402 16:33:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:47.660 [ 00:17:47.660 { 00:17:47.660 "name": "BaseBdev3", 00:17:47.660 "aliases": [ 00:17:47.660 "b7dab85a-a684-4be8-b70c-80677981b05e" 00:17:47.660 ], 00:17:47.660 "product_name": "Malloc disk", 00:17:47.660 "block_size": 512, 00:17:47.660 "num_blocks": 65536, 00:17:47.660 "uuid": "b7dab85a-a684-4be8-b70c-80677981b05e", 00:17:47.660 "assigned_rate_limits": { 00:17:47.660 "rw_ios_per_sec": 0, 00:17:47.660 "rw_mbytes_per_sec": 0, 00:17:47.660 "r_mbytes_per_sec": 0, 00:17:47.660 "w_mbytes_per_sec": 0 00:17:47.660 }, 00:17:47.660 "claimed": true, 00:17:47.660 "claim_type": "exclusive_write", 00:17:47.660 "zoned": false, 00:17:47.660 "supported_io_types": { 00:17:47.660 "read": true, 00:17:47.660 "write": true, 00:17:47.660 "unmap": true, 00:17:47.660 "write_zeroes": true, 00:17:47.660 "flush": true, 00:17:47.660 "reset": true, 00:17:47.660 "compare": false, 00:17:47.660 "compare_and_write": false, 00:17:47.660 "abort": true, 00:17:47.660 "nvme_admin": false, 00:17:47.660 "nvme_io": false 00:17:47.660 }, 00:17:47.660 "memory_domains": [ 00:17:47.660 { 00:17:47.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.660 
"dma_device_type": 2 00:17:47.660 } 00:17:47.660 ], 00:17:47.660 "driver_specific": {} 00:17:47.660 } 00:17:47.660 ] 00:17:47.660 16:33:24 -- common/autotest_common.sh@895 -- # return 0 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.660 16:33:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.919 16:33:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.919 "name": "Existed_Raid", 00:17:47.919 "uuid": "6811b0c2-fe97-4f4d-8e96-606cf04344f0", 00:17:47.919 "strip_size_kb": 0, 00:17:47.919 "state": "online", 00:17:47.919 "raid_level": "raid1", 00:17:47.919 "superblock": true, 00:17:47.919 "num_base_bdevs": 3, 00:17:47.919 "num_base_bdevs_discovered": 3, 00:17:47.919 "num_base_bdevs_operational": 3, 00:17:47.919 "base_bdevs_list": [ 00:17:47.919 { 00:17:47.919 "name": "BaseBdev1", 00:17:47.919 "uuid": "1d436ae3-babc-4e74-9f13-a1830eddfeef", 00:17:47.919 "is_configured": true, 00:17:47.919 "data_offset": 2048, 00:17:47.919 "data_size": 63488 00:17:47.919 }, 00:17:47.919 { 00:17:47.919 "name": "BaseBdev2", 00:17:47.919 "uuid": "9466f1da-5c7a-4381-bd66-76d305b28725", 00:17:47.919 "is_configured": true, 00:17:47.919 "data_offset": 2048, 00:17:47.919 "data_size": 63488 00:17:47.919 }, 00:17:47.919 { 00:17:47.919 "name": "BaseBdev3", 00:17:47.919 "uuid": "b7dab85a-a684-4be8-b70c-80677981b05e", 00:17:47.919 "is_configured": true, 00:17:47.919 "data_offset": 2048, 00:17:47.919 "data_size": 63488 00:17:47.919 } 00:17:47.919 ] 00:17:47.919 }' 00:17:47.919 16:33:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.919 16:33:24 -- common/autotest_common.sh@10 -- # set +x 00:17:48.484 16:33:25 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:48.742 [2024-07-11 16:33:25.381381] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.742 16:33:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:48.742 16:33:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:17:48.742 16:33:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:48.742 16:33:25 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:48.742 16:33:25 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:17:48.742 16:33:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:48.742 16:33:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:48.742 16:33:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:17:48.743 16:33:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:48.743 16:33:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:48.743 16:33:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:48.743 16:33:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.743 16:33:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.743 16:33:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.743 16:33:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.743 16:33:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.743 16:33:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.001 16:33:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:49.001 "name": "Existed_Raid", 00:17:49.001 "uuid": "6811b0c2-fe97-4f4d-8e96-606cf04344f0", 00:17:49.001 "strip_size_kb": 0, 00:17:49.001 "state": "online", 00:17:49.001 "raid_level": "raid1", 00:17:49.001 "superblock": true, 00:17:49.001 "num_base_bdevs": 3, 00:17:49.001 "num_base_bdevs_discovered": 2, 00:17:49.001 "num_base_bdevs_operational": 2, 00:17:49.001 "base_bdevs_list": [ 00:17:49.001 { 00:17:49.001 "name": null, 00:17:49.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.002 "is_configured": false, 00:17:49.002 "data_offset": 2048, 00:17:49.002 "data_size": 63488 00:17:49.002 }, 00:17:49.002 { 00:17:49.002 "name": "BaseBdev2", 00:17:49.002 "uuid": "9466f1da-5c7a-4381-bd66-76d305b28725", 00:17:49.002 "is_configured": true, 00:17:49.002 "data_offset": 2048, 00:17:49.002 "data_size": 63488 00:17:49.002 }, 00:17:49.002 { 00:17:49.002 "name": "BaseBdev3", 00:17:49.002 "uuid": "b7dab85a-a684-4be8-b70c-80677981b05e", 00:17:49.002 "is_configured": true, 00:17:49.002 "data_offset": 2048, 00:17:49.002 "data_size": 63488 00:17:49.002 } 00:17:49.002 ] 00:17:49.002 }' 00:17:49.002 16:33:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:49.002 16:33:25 -- common/autotest_common.sh@10 -- # set +x 00:17:49.569 16:33:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:49.569 16:33:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:49.569 16:33:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.569 16:33:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:49.828 16:33:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:49.828 16:33:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.828 16:33:26 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:50.088 [2024-07-11 16:33:26.740776] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:50.088 16:33:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:50.088 16:33:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:50.088 16:33:26 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.088 16:33:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:50.347 16:33:27 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:50.347 16:33:27 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.347 16:33:27 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:50.606 
[2024-07-11 16:33:27.279733] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:50.606 [2024-07-11 16:33:27.279767] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.606 [2024-07-11 16:33:27.279837] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.606 [2024-07-11 16:33:27.343107] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.606 [2024-07-11 16:33:27.343143] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:50.606 16:33:27 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:50.606 16:33:27 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:50.606 16:33:27 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.606 16:33:27 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:50.864 16:33:27 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:50.864 16:33:27 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:50.864 16:33:27 -- bdev/bdev_raid.sh@287 -- # killprocess 120317 00:17:50.864 16:33:27 -- common/autotest_common.sh@926 -- # '[' -z 120317 ']' 00:17:50.864 16:33:27 -- common/autotest_common.sh@930 -- # kill -0 120317 00:17:50.864 16:33:27 -- common/autotest_common.sh@931 -- # uname 00:17:50.864 16:33:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:50.864 16:33:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120317 00:17:50.864 killing process with pid 120317 00:17:50.864 16:33:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:50.864 16:33:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:50.864 16:33:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120317' 00:17:50.864 16:33:27 -- common/autotest_common.sh@945 -- # kill 120317 00:17:50.864 16:33:27 -- common/autotest_common.sh@950 -- # wait 120317 00:17:50.864 [2024-07-11 16:33:27.559236] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:50.864 [2024-07-11 16:33:27.559384] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.798 ************************************ 00:17:51.798 END TEST raid_state_function_test_sb 00:17:51.798 ************************************ 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:51.798 00:17:51.798 real 0m12.526s 00:17:51.798 user 0m22.330s 00:17:51.798 sys 0m1.352s 00:17:51.798 16:33:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.798 16:33:28 -- common/autotest_common.sh@10 -- # set +x 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:51.798 16:33:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:17:51.798 16:33:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:51.798 16:33:28 -- common/autotest_common.sh@10 -- # set +x 00:17:51.798 ************************************ 00:17:51.798 START TEST raid_superblock_test 00:17:51.798 ************************************ 00:17:51.798 16:33:28 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:51.798 16:33:28 -- 
bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@357 -- # raid_pid=120740 00:17:51.798 16:33:28 -- bdev/bdev_raid.sh@358 -- # waitforlisten 120740 /var/tmp/spdk-raid.sock 00:17:51.799 16:33:28 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:51.799 16:33:28 -- common/autotest_common.sh@819 -- # '[' -z 120740 ']' 00:17:51.799 16:33:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:51.799 16:33:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:51.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:51.799 16:33:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:51.799 16:33:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:51.799 16:33:28 -- common/autotest_common.sh@10 -- # set +x 00:17:51.799 [2024-07-11 16:33:28.579562] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:51.799 [2024-07-11 16:33:28.579751] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120740 ] 00:17:52.057 [2024-07-11 16:33:28.734611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.315 [2024-07-11 16:33:28.927360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.315 [2024-07-11 16:33:29.095021] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.881 16:33:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:52.881 16:33:29 -- common/autotest_common.sh@852 -- # return 0 00:17:52.881 16:33:29 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:52.881 16:33:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:52.881 16:33:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:52.881 16:33:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:52.881 16:33:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:52.881 16:33:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:52.881 16:33:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:52.881 16:33:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:52.881 16:33:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:52.881 malloc1 00:17:53.139 16:33:29 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:53.139 [2024-07-11 16:33:29.872122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:53.139 [2024-07-11 16:33:29.872218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.139 [2024-07-11 16:33:29.872256] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:53.139 [2024-07-11 16:33:29.872299] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.139 [2024-07-11 16:33:29.874300] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.139 [2024-07-11 16:33:29.874345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:53.139 pt1 00:17:53.139 16:33:29 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:53.139 16:33:29 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:53.139 16:33:29 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:53.139 16:33:29 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:53.139 16:33:29 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:53.139 16:33:29 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.139 16:33:29 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.139 16:33:29 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.139 16:33:29 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:53.397 malloc2 00:17:53.397 16:33:30 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:17:53.655 [2024-07-11 16:33:30.298296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:53.655 [2024-07-11 16:33:30.298397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.655 [2024-07-11 16:33:30.298437] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:53.655 [2024-07-11 16:33:30.298488] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.655 [2024-07-11 16:33:30.300453] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.655 [2024-07-11 16:33:30.300513] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:53.655 pt2 00:17:53.655 16:33:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:53.655 16:33:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:53.655 16:33:30 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:53.655 16:33:30 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:53.655 16:33:30 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:53.655 16:33:30 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.655 16:33:30 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.655 16:33:30 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.655 16:33:30 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:53.913 malloc3 00:17:53.913 16:33:30 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:53.913 [2024-07-11 16:33:30.714938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:53.913 [2024-07-11 16:33:30.715021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.913 [2024-07-11 16:33:30.715057] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:53.913 [2024-07-11 16:33:30.715097] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.913 [2024-07-11 16:33:30.717036] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.913 [2024-07-11 16:33:30.717112] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:53.913 pt3 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:54.171 [2024-07-11 16:33:30.902981] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:54.171 [2024-07-11 16:33:30.904526] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:54.171 [2024-07-11 16:33:30.904592] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:54.171 [2024-07-11 16:33:30.904771] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:54.171 [2024-07-11 16:33:30.904799] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:54.171 [2024-07-11 16:33:30.904919] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:54.171 [2024-07-11 16:33:30.905280] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:54.171 [2024-07-11 16:33:30.905304] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:17:54.171 [2024-07-11 16:33:30.905466] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.171 16:33:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.428 16:33:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.428 "name": "raid_bdev1", 00:17:54.428 "uuid": "0d9a56e6-ae7a-4b19-885d-24611ded17e0", 00:17:54.428 "strip_size_kb": 0, 00:17:54.428 "state": "online", 00:17:54.428 "raid_level": "raid1", 00:17:54.428 "superblock": true, 00:17:54.428 "num_base_bdevs": 3, 00:17:54.428 "num_base_bdevs_discovered": 3, 00:17:54.428 "num_base_bdevs_operational": 3, 00:17:54.428 "base_bdevs_list": [ 00:17:54.428 { 00:17:54.428 "name": "pt1", 00:17:54.428 "uuid": "bc92a1e4-e5b0-56ce-931b-5021c4587ff6", 00:17:54.428 "is_configured": true, 00:17:54.428 "data_offset": 2048, 00:17:54.428 "data_size": 63488 00:17:54.428 }, 00:17:54.428 { 00:17:54.428 "name": "pt2", 00:17:54.428 "uuid": "a094f8fc-29c1-50b7-9e95-552b4274c44b", 00:17:54.428 "is_configured": true, 00:17:54.428 "data_offset": 2048, 00:17:54.428 "data_size": 63488 00:17:54.428 }, 00:17:54.428 { 00:17:54.428 "name": "pt3", 00:17:54.428 "uuid": "e0152c7b-970b-5a9f-b8df-612823a3ad89", 00:17:54.428 "is_configured": true, 00:17:54.428 "data_offset": 2048, 00:17:54.428 "data_size": 63488 00:17:54.428 } 00:17:54.428 ] 00:17:54.428 }' 00:17:54.428 16:33:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.428 16:33:31 -- common/autotest_common.sh@10 -- # set +x 00:17:54.994 16:33:31 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:54.994 16:33:31 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:55.252 [2024-07-11 16:33:31.991319] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.252 16:33:31 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0d9a56e6-ae7a-4b19-885d-24611ded17e0 00:17:55.252 16:33:31 -- bdev/bdev_raid.sh@380 -- # '[' -z 0d9a56e6-ae7a-4b19-885d-24611ded17e0 ']' 00:17:55.252 16:33:32 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:55.511 [2024-07-11 16:33:32.191164] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.511 [2024-07-11 16:33:32.191188] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.511 [2024-07-11 16:33:32.191246] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.511 [2024-07-11 16:33:32.191311] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.511 [2024-07-11 16:33:32.191322] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:17:55.511 16:33:32 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.511 16:33:32 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:55.783 16:33:32 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:55.784 16:33:32 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:55.784 16:33:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:55.784 16:33:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:55.784 16:33:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:55.784 16:33:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:56.044 16:33:32 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:56.044 16:33:32 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:56.302 16:33:32 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:56.302 16:33:32 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:56.560 16:33:33 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:56.560 16:33:33 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:56.560 16:33:33 -- common/autotest_common.sh@640 -- # local es=0 00:17:56.560 16:33:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:56.560 16:33:33 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.560 16:33:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:56.560 16:33:33 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.560 16:33:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:56.560 16:33:33 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.560 16:33:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:56.560 16:33:33 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.560 16:33:33 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:56.560 16:33:33 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:56.818 [2024-07-11 16:33:33.375352] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:56.818 [2024-07-11 16:33:33.376889] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:56.818 [2024-07-11 16:33:33.376985] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:56.818 [2024-07-11 16:33:33.377041] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:56.818 [2024-07-11 16:33:33.377107] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:56.818 [2024-07-11 16:33:33.377171] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:56.818 [2024-07-11 16:33:33.377215] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.818 [2024-07-11 16:33:33.377226] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:17:56.818 request: 00:17:56.818 { 00:17:56.818 "name": "raid_bdev1", 00:17:56.818 "raid_level": "raid1", 00:17:56.818 "base_bdevs": [ 00:17:56.818 "malloc1", 00:17:56.818 "malloc2", 00:17:56.818 "malloc3" 00:17:56.818 ], 00:17:56.818 "superblock": false, 00:17:56.818 "method": "bdev_raid_create", 00:17:56.818 "req_id": 1 00:17:56.818 } 00:17:56.818 Got JSON-RPC error response 00:17:56.818 response: 00:17:56.818 { 00:17:56.818 "code": -17, 00:17:56.818 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:56.818 } 00:17:56.818 16:33:33 -- common/autotest_common.sh@643 -- # es=1 00:17:56.818 16:33:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:56.818 16:33:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:56.818 16:33:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:56.818 16:33:33 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:56.818 16:33:33 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.818 16:33:33 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:56.818 16:33:33 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:56.818 16:33:33 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:57.076 [2024-07-11 16:33:33.759359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:57.076 [2024-07-11 16:33:33.759430] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.076 [2024-07-11 16:33:33.759463] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:57.076 [2024-07-11 16:33:33.759481] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.076 [2024-07-11 16:33:33.761345] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.076 [2024-07-11 16:33:33.761388] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:57.076 [2024-07-11 16:33:33.761513] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:57.076 [2024-07-11 16:33:33.761565] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:57.076 pt1 00:17:57.076 16:33:33 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:57.076 
16:33:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:57.076 16:33:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:57.076 16:33:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:57.076 16:33:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:57.076 16:33:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:57.076 16:33:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.076 16:33:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.076 16:33:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.076 16:33:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.076 16:33:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.076 16:33:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.333 16:33:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.333 "name": "raid_bdev1", 00:17:57.333 "uuid": "0d9a56e6-ae7a-4b19-885d-24611ded17e0", 00:17:57.333 "strip_size_kb": 0, 00:17:57.333 "state": "configuring", 00:17:57.333 "raid_level": "raid1", 00:17:57.333 "superblock": true, 00:17:57.333 "num_base_bdevs": 3, 00:17:57.333 "num_base_bdevs_discovered": 1, 00:17:57.333 "num_base_bdevs_operational": 3, 00:17:57.333 "base_bdevs_list": [ 00:17:57.333 { 00:17:57.333 "name": "pt1", 00:17:57.333 "uuid": "bc92a1e4-e5b0-56ce-931b-5021c4587ff6", 00:17:57.333 "is_configured": true, 00:17:57.333 "data_offset": 2048, 00:17:57.333 "data_size": 63488 00:17:57.333 }, 00:17:57.333 { 00:17:57.333 "name": null, 00:17:57.333 "uuid": "a094f8fc-29c1-50b7-9e95-552b4274c44b", 00:17:57.333 "is_configured": false, 00:17:57.333 "data_offset": 2048, 00:17:57.333 "data_size": 63488 00:17:57.333 }, 00:17:57.333 { 00:17:57.333 "name": null, 00:17:57.333 "uuid": "e0152c7b-970b-5a9f-b8df-612823a3ad89", 00:17:57.333 "is_configured": false, 00:17:57.333 "data_offset": 2048, 00:17:57.333 "data_size": 63488 00:17:57.333 } 00:17:57.333 ] 00:17:57.333 }' 00:17:57.333 16:33:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.333 16:33:33 -- common/autotest_common.sh@10 -- # set +x 00:17:57.906 16:33:34 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:57.906 16:33:34 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.171 [2024-07-11 16:33:34.759550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.171 [2024-07-11 16:33:34.759628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.171 [2024-07-11 16:33:34.759666] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:58.171 [2024-07-11 16:33:34.759686] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.171 [2024-07-11 16:33:34.760104] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.171 [2024-07-11 16:33:34.760132] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.171 [2024-07-11 16:33:34.760230] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:58.171 [2024-07-11 16:33:34.760256] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.171 pt2 00:17:58.171 16:33:34 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:58.428 [2024-07-11 16:33:35.015667] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:58.428 16:33:35 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:58.428 16:33:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:58.428 16:33:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:58.428 16:33:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:58.428 16:33:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:58.428 16:33:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:58.428 16:33:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.428 16:33:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.428 16:33:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:58.428 16:33:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.428 16:33:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.428 16:33:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.685 16:33:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:58.685 "name": "raid_bdev1", 00:17:58.685 "uuid": "0d9a56e6-ae7a-4b19-885d-24611ded17e0", 00:17:58.685 "strip_size_kb": 0, 00:17:58.685 "state": "configuring", 00:17:58.685 "raid_level": "raid1", 00:17:58.685 "superblock": true, 00:17:58.685 "num_base_bdevs": 3, 00:17:58.685 "num_base_bdevs_discovered": 1, 00:17:58.685 "num_base_bdevs_operational": 3, 00:17:58.685 "base_bdevs_list": [ 00:17:58.685 { 00:17:58.685 "name": "pt1", 00:17:58.685 "uuid": "bc92a1e4-e5b0-56ce-931b-5021c4587ff6", 00:17:58.685 "is_configured": true, 00:17:58.685 "data_offset": 2048, 00:17:58.685 "data_size": 63488 00:17:58.685 }, 00:17:58.685 { 00:17:58.685 "name": null, 00:17:58.685 "uuid": "a094f8fc-29c1-50b7-9e95-552b4274c44b", 00:17:58.685 "is_configured": false, 00:17:58.685 "data_offset": 2048, 00:17:58.685 "data_size": 63488 00:17:58.685 }, 00:17:58.685 { 00:17:58.685 "name": null, 00:17:58.685 "uuid": "e0152c7b-970b-5a9f-b8df-612823a3ad89", 00:17:58.685 "is_configured": false, 00:17:58.685 "data_offset": 2048, 00:17:58.685 "data_size": 63488 00:17:58.685 } 00:17:58.685 ] 00:17:58.685 }' 00:17:58.685 16:33:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:58.685 16:33:35 -- common/autotest_common.sh@10 -- # set +x 00:17:59.250 16:33:35 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:59.250 16:33:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:59.250 16:33:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:59.508 [2024-07-11 16:33:36.083895] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:59.508 [2024-07-11 16:33:36.083979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.508 [2024-07-11 16:33:36.084020] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:59.508 [2024-07-11 16:33:36.084044] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.508 [2024-07-11 16:33:36.084739] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.508 [2024-07-11 16:33:36.084798] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:59.508 [2024-07-11 16:33:36.084897] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:59.508 [2024-07-11 16:33:36.084922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:59.508 pt2 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:59.508 [2024-07-11 16:33:36.288023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:59.508 [2024-07-11 16:33:36.288123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:59.508 [2024-07-11 16:33:36.288158] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:59.508 [2024-07-11 16:33:36.288183] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:59.508 [2024-07-11 16:33:36.288861] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:59.508 [2024-07-11 16:33:36.288926] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:59.508 [2024-07-11 16:33:36.289060] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:59.508 [2024-07-11 16:33:36.289088] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:59.508 [2024-07-11 16:33:36.289500] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:17:59.508 [2024-07-11 16:33:36.289523] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:59.508 [2024-07-11 16:33:36.289633] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:59.508 [2024-07-11 16:33:36.290188] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:59.508 [2024-07-11 16:33:36.290210] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:59.508 [2024-07-11 16:33:36.290350] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.508 pt3 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:59.508 16:33:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.508 16:33:36 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.766 16:33:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:59.766 "name": "raid_bdev1", 00:17:59.766 "uuid": "0d9a56e6-ae7a-4b19-885d-24611ded17e0", 00:17:59.766 "strip_size_kb": 0, 00:17:59.766 "state": "online", 00:17:59.766 "raid_level": "raid1", 00:17:59.766 "superblock": true, 00:17:59.766 "num_base_bdevs": 3, 00:17:59.766 "num_base_bdevs_discovered": 3, 00:17:59.766 "num_base_bdevs_operational": 3, 00:17:59.766 "base_bdevs_list": [ 00:17:59.766 { 00:17:59.766 "name": "pt1", 00:17:59.766 "uuid": "bc92a1e4-e5b0-56ce-931b-5021c4587ff6", 00:17:59.766 "is_configured": true, 00:17:59.766 "data_offset": 2048, 00:17:59.766 "data_size": 63488 00:17:59.766 }, 00:17:59.766 { 00:17:59.766 "name": "pt2", 00:17:59.766 "uuid": "a094f8fc-29c1-50b7-9e95-552b4274c44b", 00:17:59.766 "is_configured": true, 00:17:59.766 "data_offset": 2048, 00:17:59.766 "data_size": 63488 00:17:59.766 }, 00:17:59.766 { 00:17:59.766 "name": "pt3", 00:17:59.766 "uuid": "e0152c7b-970b-5a9f-b8df-612823a3ad89", 00:17:59.766 "is_configured": true, 00:17:59.766 "data_offset": 2048, 00:17:59.766 "data_size": 63488 00:17:59.766 } 00:17:59.766 ] 00:17:59.766 }' 00:17:59.766 16:33:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:59.766 16:33:36 -- common/autotest_common.sh@10 -- # set +x 00:18:00.702 16:33:37 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:00.702 16:33:37 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:00.702 [2024-07-11 16:33:37.364545] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.702 16:33:37 -- bdev/bdev_raid.sh@430 -- # '[' 0d9a56e6-ae7a-4b19-885d-24611ded17e0 '!=' 0d9a56e6-ae7a-4b19-885d-24611ded17e0 ']' 00:18:00.702 16:33:37 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:18:00.702 16:33:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:00.702 16:33:37 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:00.702 16:33:37 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:00.961 [2024-07-11 16:33:37.620334] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:00.961 16:33:37 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:00.961 16:33:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:00.961 16:33:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:00.961 16:33:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:00.961 16:33:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:00.961 16:33:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:00.961 16:33:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.961 16:33:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.961 16:33:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.961 16:33:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.961 16:33:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.961 16:33:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.219 16:33:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:01.219 "name": "raid_bdev1", 00:18:01.219 "uuid": "0d9a56e6-ae7a-4b19-885d-24611ded17e0", 00:18:01.219 "strip_size_kb": 0, 00:18:01.219 "state": "online", 
00:18:01.219 "raid_level": "raid1", 00:18:01.219 "superblock": true, 00:18:01.219 "num_base_bdevs": 3, 00:18:01.219 "num_base_bdevs_discovered": 2, 00:18:01.219 "num_base_bdevs_operational": 2, 00:18:01.219 "base_bdevs_list": [ 00:18:01.219 { 00:18:01.219 "name": null, 00:18:01.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.219 "is_configured": false, 00:18:01.219 "data_offset": 2048, 00:18:01.219 "data_size": 63488 00:18:01.219 }, 00:18:01.219 { 00:18:01.219 "name": "pt2", 00:18:01.219 "uuid": "a094f8fc-29c1-50b7-9e95-552b4274c44b", 00:18:01.219 "is_configured": true, 00:18:01.219 "data_offset": 2048, 00:18:01.219 "data_size": 63488 00:18:01.219 }, 00:18:01.219 { 00:18:01.219 "name": "pt3", 00:18:01.219 "uuid": "e0152c7b-970b-5a9f-b8df-612823a3ad89", 00:18:01.219 "is_configured": true, 00:18:01.219 "data_offset": 2048, 00:18:01.219 "data_size": 63488 00:18:01.219 } 00:18:01.219 ] 00:18:01.219 }' 00:18:01.219 16:33:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:01.219 16:33:37 -- common/autotest_common.sh@10 -- # set +x 00:18:01.785 16:33:38 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:02.043 [2024-07-11 16:33:38.736512] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.043 [2024-07-11 16:33:38.736540] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:02.043 [2024-07-11 16:33:38.736625] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.043 [2024-07-11 16:33:38.736682] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.043 [2024-07-11 16:33:38.736693] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:18:02.043 16:33:38 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.043 16:33:38 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:18:02.301 16:33:38 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:18:02.301 16:33:38 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:18:02.301 16:33:38 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:18:02.301 16:33:38 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:02.301 16:33:38 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:02.560 16:33:39 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:02.560 16:33:39 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:02.560 16:33:39 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:02.819 16:33:39 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:02.819 16:33:39 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:02.819 16:33:39 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:18:02.819 16:33:39 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:02.819 16:33:39 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:03.077 [2024-07-11 16:33:39.628699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:03.077 [2024-07-11 16:33:39.628779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.077 [2024-07-11 
16:33:39.628816] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:03.077 [2024-07-11 16:33:39.628841] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.077 [2024-07-11 16:33:39.631217] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.077 [2024-07-11 16:33:39.631281] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:03.077 [2024-07-11 16:33:39.631407] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:03.077 [2024-07-11 16:33:39.631478] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:03.077 pt2 00:18:03.077 16:33:39 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:03.077 16:33:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:03.077 16:33:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:03.077 16:33:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:03.077 16:33:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:03.077 16:33:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:03.077 16:33:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:03.077 16:33:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:03.077 16:33:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:03.077 16:33:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:03.077 16:33:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.077 16:33:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.077 16:33:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:03.077 "name": "raid_bdev1", 00:18:03.077 "uuid": "0d9a56e6-ae7a-4b19-885d-24611ded17e0", 00:18:03.077 "strip_size_kb": 0, 00:18:03.077 "state": "configuring", 00:18:03.077 "raid_level": "raid1", 00:18:03.077 "superblock": true, 00:18:03.077 "num_base_bdevs": 3, 00:18:03.077 "num_base_bdevs_discovered": 1, 00:18:03.077 "num_base_bdevs_operational": 2, 00:18:03.077 "base_bdevs_list": [ 00:18:03.077 { 00:18:03.077 "name": null, 00:18:03.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.077 "is_configured": false, 00:18:03.077 "data_offset": 2048, 00:18:03.077 "data_size": 63488 00:18:03.077 }, 00:18:03.077 { 00:18:03.077 "name": "pt2", 00:18:03.077 "uuid": "a094f8fc-29c1-50b7-9e95-552b4274c44b", 00:18:03.077 "is_configured": true, 00:18:03.077 "data_offset": 2048, 00:18:03.077 "data_size": 63488 00:18:03.077 }, 00:18:03.077 { 00:18:03.077 "name": null, 00:18:03.078 "uuid": "e0152c7b-970b-5a9f-b8df-612823a3ad89", 00:18:03.078 "is_configured": false, 00:18:03.078 "data_offset": 2048, 00:18:03.078 "data_size": 63488 00:18:03.078 } 00:18:03.078 ] 00:18:03.078 }' 00:18:03.078 16:33:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:03.078 16:33:39 -- common/autotest_common.sh@10 -- # set +x 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@462 -- # i=2 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:04.039 [2024-07-11 16:33:40.668898] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:04.039 [2024-07-11 16:33:40.669013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.039 [2024-07-11 16:33:40.669056] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:04.039 [2024-07-11 16:33:40.669081] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.039 [2024-07-11 16:33:40.669602] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.039 [2024-07-11 16:33:40.669641] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:04.039 [2024-07-11 16:33:40.669796] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:04.039 [2024-07-11 16:33:40.669831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:04.039 [2024-07-11 16:33:40.669946] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:18:04.039 [2024-07-11 16:33:40.669959] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:04.039 [2024-07-11 16:33:40.670064] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:04.039 [2024-07-11 16:33:40.670382] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:18:04.039 [2024-07-11 16:33:40.670404] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:18:04.039 [2024-07-11 16:33:40.670531] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.039 pt3 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:04.039 16:33:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.040 16:33:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.297 16:33:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:04.297 "name": "raid_bdev1", 00:18:04.297 "uuid": "0d9a56e6-ae7a-4b19-885d-24611ded17e0", 00:18:04.297 "strip_size_kb": 0, 00:18:04.297 "state": "online", 00:18:04.297 "raid_level": "raid1", 00:18:04.297 "superblock": true, 00:18:04.297 "num_base_bdevs": 3, 00:18:04.297 "num_base_bdevs_discovered": 2, 00:18:04.297 "num_base_bdevs_operational": 2, 00:18:04.297 "base_bdevs_list": [ 00:18:04.297 { 00:18:04.297 "name": null, 00:18:04.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.297 "is_configured": false, 00:18:04.297 "data_offset": 2048, 00:18:04.297 "data_size": 63488 00:18:04.297 }, 00:18:04.297 { 00:18:04.297 "name": "pt2", 00:18:04.297 "uuid": "a094f8fc-29c1-50b7-9e95-552b4274c44b", 00:18:04.297 
"is_configured": true, 00:18:04.297 "data_offset": 2048, 00:18:04.297 "data_size": 63488 00:18:04.297 }, 00:18:04.297 { 00:18:04.297 "name": "pt3", 00:18:04.297 "uuid": "e0152c7b-970b-5a9f-b8df-612823a3ad89", 00:18:04.297 "is_configured": true, 00:18:04.297 "data_offset": 2048, 00:18:04.297 "data_size": 63488 00:18:04.297 } 00:18:04.297 ] 00:18:04.297 }' 00:18:04.297 16:33:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:04.297 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:18:04.861 16:33:41 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:18:04.861 16:33:41 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:04.861 [2024-07-11 16:33:41.637392] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.861 [2024-07-11 16:33:41.637423] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.861 [2024-07-11 16:33:41.637498] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.861 [2024-07-11 16:33:41.637561] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.861 [2024-07-11 16:33:41.637573] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:18:04.861 16:33:41 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.861 16:33:41 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:18:05.120 16:33:41 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:18:05.120 16:33:41 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:18:05.120 16:33:41 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:05.378 [2024-07-11 16:33:41.985522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:05.378 [2024-07-11 16:33:41.985939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.378 [2024-07-11 16:33:41.986127] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:05.378 [2024-07-11 16:33:41.986264] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.378 [2024-07-11 16:33:41.988445] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.378 [2024-07-11 16:33:41.988599] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:05.378 [2024-07-11 16:33:41.988825] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:05.378 [2024-07-11 16:33:41.988887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:05.378 pt1 00:18:05.378 16:33:41 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:05.378 16:33:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:05.378 16:33:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:05.378 16:33:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:05.378 16:33:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:05.378 16:33:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:05.378 16:33:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:05.378 16:33:41 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:18:05.378 16:33:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:05.378 16:33:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:05.378 16:33:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.378 16:33:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.636 16:33:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:05.636 "name": "raid_bdev1", 00:18:05.636 "uuid": "0d9a56e6-ae7a-4b19-885d-24611ded17e0", 00:18:05.636 "strip_size_kb": 0, 00:18:05.636 "state": "configuring", 00:18:05.636 "raid_level": "raid1", 00:18:05.636 "superblock": true, 00:18:05.636 "num_base_bdevs": 3, 00:18:05.636 "num_base_bdevs_discovered": 1, 00:18:05.636 "num_base_bdevs_operational": 3, 00:18:05.636 "base_bdevs_list": [ 00:18:05.636 { 00:18:05.636 "name": "pt1", 00:18:05.636 "uuid": "bc92a1e4-e5b0-56ce-931b-5021c4587ff6", 00:18:05.636 "is_configured": true, 00:18:05.636 "data_offset": 2048, 00:18:05.636 "data_size": 63488 00:18:05.636 }, 00:18:05.636 { 00:18:05.636 "name": null, 00:18:05.636 "uuid": "a094f8fc-29c1-50b7-9e95-552b4274c44b", 00:18:05.636 "is_configured": false, 00:18:05.636 "data_offset": 2048, 00:18:05.636 "data_size": 63488 00:18:05.636 }, 00:18:05.636 { 00:18:05.636 "name": null, 00:18:05.636 "uuid": "e0152c7b-970b-5a9f-b8df-612823a3ad89", 00:18:05.636 "is_configured": false, 00:18:05.636 "data_offset": 2048, 00:18:05.636 "data_size": 63488 00:18:05.636 } 00:18:05.636 ] 00:18:05.636 }' 00:18:05.636 16:33:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:05.636 16:33:42 -- common/autotest_common.sh@10 -- # set +x 00:18:06.202 16:33:42 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:18:06.202 16:33:42 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:06.202 16:33:42 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:06.461 16:33:43 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:06.461 16:33:43 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:06.461 16:33:43 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@489 -- # i=2 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:06.720 [2024-07-11 16:33:43.493807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:06.720 [2024-07-11 16:33:43.494255] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.720 [2024-07-11 16:33:43.494399] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:06.720 [2024-07-11 16:33:43.494536] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.720 [2024-07-11 16:33:43.495066] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.720 [2024-07-11 16:33:43.495224] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:06.720 [2024-07-11 16:33:43.495439] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:06.720 
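[Editor's note: the surrounding records show the superblock re-examine path: pt3 is re-created, examine finds a raid1 superblock on it with a newer sequence number than the stale raid_bdev1 (4 vs 2 in the next record), so the old raid bdev is deleted and re-assembled from the newer metadata. A minimal sketch of the same RPC sequence, reusing only the socket, bdev names, and UUID that appear in this run (the script path is this CI host's layout, not a general default):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # drop both remaining base bdevs; raid_bdev1 degrades and is torn down
    $RPC bdev_passthru_delete pt2
    $RPC bdev_passthru_delete pt3
    # re-create pt3 on top of malloc3; examine spots its raid superblock
    $RPC bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
    # the raid bdev reappears in 'configuring' until enough base bdevs are back
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
]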
[2024-07-11 16:33:43.495466] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:06.720 [2024-07-11 16:33:43.495475] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:06.720 [2024-07-11 16:33:43.495492] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:18:06.720 [2024-07-11 16:33:43.495564] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:06.720 pt3 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.720 16:33:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.979 16:33:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:06.979 "name": "raid_bdev1", 00:18:06.979 "uuid": "0d9a56e6-ae7a-4b19-885d-24611ded17e0", 00:18:06.979 "strip_size_kb": 0, 00:18:06.979 "state": "configuring", 00:18:06.979 "raid_level": "raid1", 00:18:06.979 "superblock": true, 00:18:06.979 "num_base_bdevs": 3, 00:18:06.979 "num_base_bdevs_discovered": 1, 00:18:06.979 "num_base_bdevs_operational": 2, 00:18:06.979 "base_bdevs_list": [ 00:18:06.979 { 00:18:06.979 "name": null, 00:18:06.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.979 "is_configured": false, 00:18:06.979 "data_offset": 2048, 00:18:06.979 "data_size": 63488 00:18:06.979 }, 00:18:06.979 { 00:18:06.979 "name": null, 00:18:06.979 "uuid": "a094f8fc-29c1-50b7-9e95-552b4274c44b", 00:18:06.979 "is_configured": false, 00:18:06.979 "data_offset": 2048, 00:18:06.979 "data_size": 63488 00:18:06.979 }, 00:18:06.979 { 00:18:06.979 "name": "pt3", 00:18:06.979 "uuid": "e0152c7b-970b-5a9f-b8df-612823a3ad89", 00:18:06.979 "is_configured": true, 00:18:06.979 "data_offset": 2048, 00:18:06.979 "data_size": 63488 00:18:06.979 } 00:18:06.979 ] 00:18:06.979 }' 00:18:06.979 16:33:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:06.979 16:33:43 -- common/autotest_common.sh@10 -- # set +x 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:07.919 [2024-07-11 16:33:44.561335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:07.919 [2024-07-11 16:33:44.561426] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.919 [2024-07-11 16:33:44.561469] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:07.919 [2024-07-11 16:33:44.561502] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.919 [2024-07-11 16:33:44.562033] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.919 [2024-07-11 16:33:44.562097] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:07.919 [2024-07-11 16:33:44.562225] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:07.919 [2024-07-11 16:33:44.562251] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:07.919 [2024-07-11 16:33:44.562421] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:18:07.919 [2024-07-11 16:33:44.562445] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:07.919 [2024-07-11 16:33:44.562566] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:07.919 [2024-07-11 16:33:44.562893] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:18:07.919 [2024-07-11 16:33:44.562932] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:18:07.919 [2024-07-11 16:33:44.563107] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.919 pt2 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.919 16:33:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.177 16:33:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.177 "name": "raid_bdev1", 00:18:08.177 "uuid": "0d9a56e6-ae7a-4b19-885d-24611ded17e0", 00:18:08.177 "strip_size_kb": 0, 00:18:08.177 "state": "online", 00:18:08.177 "raid_level": "raid1", 00:18:08.177 "superblock": true, 00:18:08.177 "num_base_bdevs": 3, 00:18:08.177 "num_base_bdevs_discovered": 2, 00:18:08.177 "num_base_bdevs_operational": 2, 00:18:08.177 "base_bdevs_list": [ 00:18:08.177 { 00:18:08.177 "name": null, 00:18:08.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.177 "is_configured": false, 00:18:08.177 "data_offset": 2048, 00:18:08.177 "data_size": 63488 00:18:08.177 }, 00:18:08.177 { 00:18:08.177 "name": "pt2", 00:18:08.177 "uuid": "a094f8fc-29c1-50b7-9e95-552b4274c44b", 00:18:08.177 "is_configured": true, 00:18:08.177 "data_offset": 2048, 00:18:08.177 "data_size": 63488 00:18:08.177 
}, 00:18:08.177 { 00:18:08.177 "name": "pt3", 00:18:08.177 "uuid": "e0152c7b-970b-5a9f-b8df-612823a3ad89", 00:18:08.177 "is_configured": true, 00:18:08.177 "data_offset": 2048, 00:18:08.177 "data_size": 63488 00:18:08.177 } 00:18:08.177 ] 00:18:08.177 }' 00:18:08.177 16:33:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.177 16:33:44 -- common/autotest_common.sh@10 -- # set +x 00:18:08.743 16:33:45 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:08.743 16:33:45 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:09.001 [2024-07-11 16:33:45.621870] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:09.001 16:33:45 -- bdev/bdev_raid.sh@506 -- # '[' 0d9a56e6-ae7a-4b19-885d-24611ded17e0 '!=' 0d9a56e6-ae7a-4b19-885d-24611ded17e0 ']' 00:18:09.001 16:33:45 -- bdev/bdev_raid.sh@511 -- # killprocess 120740 00:18:09.001 16:33:45 -- common/autotest_common.sh@926 -- # '[' -z 120740 ']' 00:18:09.001 16:33:45 -- common/autotest_common.sh@930 -- # kill -0 120740 00:18:09.001 16:33:45 -- common/autotest_common.sh@931 -- # uname 00:18:09.001 16:33:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:09.001 16:33:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 120740 00:18:09.001 killing process with pid 120740 00:18:09.001 16:33:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:09.001 16:33:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:09.001 16:33:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 120740' 00:18:09.001 16:33:45 -- common/autotest_common.sh@945 -- # kill 120740 00:18:09.001 16:33:45 -- common/autotest_common.sh@950 -- # wait 120740 00:18:09.001 [2024-07-11 16:33:45.654405] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:09.001 [2024-07-11 16:33:45.654476] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.001 [2024-07-11 16:33:45.654587] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.001 [2024-07-11 16:33:45.654608] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:18:09.260 [2024-07-11 16:33:45.843209] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:10.195 ************************************ 00:18:10.195 END TEST raid_superblock_test 00:18:10.195 ************************************ 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:10.195 00:18:10.195 real 0m18.230s 00:18:10.195 user 0m33.785s 00:18:10.195 sys 0m1.980s 00:18:10.195 16:33:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:10.195 16:33:46 -- common/autotest_common.sh@10 -- # set +x 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:18:10.195 16:33:46 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:10.195 16:33:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:10.195 16:33:46 -- common/autotest_common.sh@10 -- # set +x 00:18:10.195 ************************************ 00:18:10.195 START TEST raid_state_function_test 00:18:10.195 ************************************ 00:18:10.195 16:33:46 -- 
common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@226 -- # raid_pid=121372 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:10.195 Process raid pid: 121372 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121372' 00:18:10.195 16:33:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121372 /var/tmp/spdk-raid.sock 00:18:10.195 16:33:46 -- common/autotest_common.sh@819 -- # '[' -z 121372 ']' 00:18:10.195 16:33:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:10.195 16:33:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:10.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:10.195 16:33:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:10.195 16:33:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:10.195 16:33:46 -- common/autotest_common.sh@10 -- # set +x 00:18:10.195 [2024-07-11 16:33:46.866993] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
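[Editor's note: raid_state_function_test drives the raid state machine from a dedicated bdev_svc app with its own RPC socket, as the trace above shows. A sketch of that setup, assuming the helper functions (waitforlisten, killprocess) from this repo's autotest_common.sh and the binary path used in this run:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # block until the app accepts RPCs on the UNIX domain socket
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    # ... bdev_raid_create / bdev_malloc_create / verify steps go here ...
    killprocess "$raid_pid"
]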
00:18:10.195 [2024-07-11 16:33:46.867173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.453 [2024-07-11 16:33:47.046064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.712 [2024-07-11 16:33:47.275976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.712 [2024-07-11 16:33:47.445469] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.290 16:33:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:11.290 16:33:47 -- common/autotest_common.sh@852 -- # return 0 00:18:11.290 16:33:47 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:11.290 [2024-07-11 16:33:48.023239] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:11.290 [2024-07-11 16:33:48.023339] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:11.290 [2024-07-11 16:33:48.023352] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:11.290 [2024-07-11 16:33:48.023392] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:11.290 [2024-07-11 16:33:48.023399] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:11.290 [2024-07-11 16:33:48.023433] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:11.290 [2024-07-11 16:33:48.023441] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:11.290 [2024-07-11 16:33:48.023461] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:11.290 16:33:48 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:11.290 16:33:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:11.290 16:33:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:11.290 16:33:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:11.290 16:33:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:11.290 16:33:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:11.290 16:33:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:11.290 16:33:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:11.290 16:33:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:11.290 16:33:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:11.290 16:33:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.290 16:33:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.592 16:33:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.592 "name": "Existed_Raid", 00:18:11.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.592 "strip_size_kb": 64, 00:18:11.592 "state": "configuring", 00:18:11.592 "raid_level": "raid0", 00:18:11.592 "superblock": false, 00:18:11.592 "num_base_bdevs": 4, 00:18:11.592 "num_base_bdevs_discovered": 0, 00:18:11.592 "num_base_bdevs_operational": 4, 00:18:11.592 "base_bdevs_list": [ 00:18:11.592 { 00:18:11.592 
"name": "BaseBdev1", 00:18:11.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.592 "is_configured": false, 00:18:11.592 "data_offset": 0, 00:18:11.592 "data_size": 0 00:18:11.592 }, 00:18:11.592 { 00:18:11.592 "name": "BaseBdev2", 00:18:11.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.592 "is_configured": false, 00:18:11.592 "data_offset": 0, 00:18:11.592 "data_size": 0 00:18:11.592 }, 00:18:11.592 { 00:18:11.592 "name": "BaseBdev3", 00:18:11.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.592 "is_configured": false, 00:18:11.592 "data_offset": 0, 00:18:11.592 "data_size": 0 00:18:11.592 }, 00:18:11.592 { 00:18:11.592 "name": "BaseBdev4", 00:18:11.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.592 "is_configured": false, 00:18:11.592 "data_offset": 0, 00:18:11.592 "data_size": 0 00:18:11.592 } 00:18:11.592 ] 00:18:11.592 }' 00:18:11.593 16:33:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.593 16:33:48 -- common/autotest_common.sh@10 -- # set +x 00:18:12.167 16:33:48 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:12.425 [2024-07-11 16:33:49.051294] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:12.425 [2024-07-11 16:33:49.051341] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:12.425 16:33:49 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:12.684 [2024-07-11 16:33:49.235373] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:12.684 [2024-07-11 16:33:49.235421] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:12.684 [2024-07-11 16:33:49.235447] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:12.684 [2024-07-11 16:33:49.235475] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:12.684 [2024-07-11 16:33:49.235483] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:12.684 [2024-07-11 16:33:49.235511] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:12.684 [2024-07-11 16:33:49.235518] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:12.684 [2024-07-11 16:33:49.235544] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:12.684 16:33:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:12.684 [2024-07-11 16:33:49.444418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.684 BaseBdev1 00:18:12.684 16:33:49 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:12.684 16:33:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:12.684 16:33:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:12.684 16:33:49 -- common/autotest_common.sh@889 -- # local i 00:18:12.684 16:33:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:12.684 16:33:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:12.684 16:33:49 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:12.942 16:33:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:13.201 [ 00:18:13.201 { 00:18:13.201 "name": "BaseBdev1", 00:18:13.201 "aliases": [ 00:18:13.201 "7c819479-a3ba-44fb-aa6e-d1281c630741" 00:18:13.201 ], 00:18:13.201 "product_name": "Malloc disk", 00:18:13.201 "block_size": 512, 00:18:13.201 "num_blocks": 65536, 00:18:13.201 "uuid": "7c819479-a3ba-44fb-aa6e-d1281c630741", 00:18:13.201 "assigned_rate_limits": { 00:18:13.201 "rw_ios_per_sec": 0, 00:18:13.201 "rw_mbytes_per_sec": 0, 00:18:13.201 "r_mbytes_per_sec": 0, 00:18:13.201 "w_mbytes_per_sec": 0 00:18:13.201 }, 00:18:13.201 "claimed": true, 00:18:13.201 "claim_type": "exclusive_write", 00:18:13.201 "zoned": false, 00:18:13.201 "supported_io_types": { 00:18:13.201 "read": true, 00:18:13.201 "write": true, 00:18:13.201 "unmap": true, 00:18:13.201 "write_zeroes": true, 00:18:13.201 "flush": true, 00:18:13.201 "reset": true, 00:18:13.201 "compare": false, 00:18:13.201 "compare_and_write": false, 00:18:13.201 "abort": true, 00:18:13.201 "nvme_admin": false, 00:18:13.201 "nvme_io": false 00:18:13.201 }, 00:18:13.201 "memory_domains": [ 00:18:13.201 { 00:18:13.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.201 "dma_device_type": 2 00:18:13.201 } 00:18:13.201 ], 00:18:13.201 "driver_specific": {} 00:18:13.201 } 00:18:13.201 ] 00:18:13.201 16:33:49 -- common/autotest_common.sh@895 -- # return 0 00:18:13.201 16:33:49 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:13.201 16:33:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:13.201 16:33:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:13.201 16:33:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:13.201 16:33:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:13.201 16:33:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:13.201 16:33:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.201 16:33:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.201 16:33:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.201 16:33:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.201 16:33:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.201 16:33:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.460 16:33:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:13.460 "name": "Existed_Raid", 00:18:13.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.460 "strip_size_kb": 64, 00:18:13.460 "state": "configuring", 00:18:13.460 "raid_level": "raid0", 00:18:13.460 "superblock": false, 00:18:13.460 "num_base_bdevs": 4, 00:18:13.460 "num_base_bdevs_discovered": 1, 00:18:13.460 "num_base_bdevs_operational": 4, 00:18:13.460 "base_bdevs_list": [ 00:18:13.460 { 00:18:13.460 "name": "BaseBdev1", 00:18:13.460 "uuid": "7c819479-a3ba-44fb-aa6e-d1281c630741", 00:18:13.460 "is_configured": true, 00:18:13.460 "data_offset": 0, 00:18:13.460 "data_size": 65536 00:18:13.460 }, 00:18:13.460 { 00:18:13.460 "name": "BaseBdev2", 00:18:13.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.460 "is_configured": false, 00:18:13.460 "data_offset": 0, 00:18:13.460 "data_size": 0 00:18:13.460 }, 
00:18:13.460 { 00:18:13.460 "name": "BaseBdev3", 00:18:13.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.460 "is_configured": false, 00:18:13.460 "data_offset": 0, 00:18:13.460 "data_size": 0 00:18:13.460 }, 00:18:13.460 { 00:18:13.460 "name": "BaseBdev4", 00:18:13.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.460 "is_configured": false, 00:18:13.460 "data_offset": 0, 00:18:13.460 "data_size": 0 00:18:13.460 } 00:18:13.460 ] 00:18:13.460 }' 00:18:13.460 16:33:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:13.460 16:33:50 -- common/autotest_common.sh@10 -- # set +x 00:18:14.027 16:33:50 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:14.285 [2024-07-11 16:33:50.996734] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:14.285 [2024-07-11 16:33:50.996785] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:14.285 16:33:51 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:14.285 16:33:51 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:14.544 [2024-07-11 16:33:51.164795] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.544 [2024-07-11 16:33:51.166566] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.544 [2024-07-11 16:33:51.166638] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.544 [2024-07-11 16:33:51.166665] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:14.544 [2024-07-11 16:33:51.166687] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:14.544 [2024-07-11 16:33:51.166695] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:14.544 [2024-07-11 16:33:51.166709] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.544 16:33:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.805 16:33:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.805 "name": "Existed_Raid", 00:18:14.805 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.805 "strip_size_kb": 64, 00:18:14.805 "state": "configuring", 00:18:14.805 "raid_level": "raid0", 00:18:14.805 "superblock": false, 00:18:14.805 "num_base_bdevs": 4, 00:18:14.805 "num_base_bdevs_discovered": 1, 00:18:14.805 "num_base_bdevs_operational": 4, 00:18:14.805 "base_bdevs_list": [ 00:18:14.805 { 00:18:14.805 "name": "BaseBdev1", 00:18:14.805 "uuid": "7c819479-a3ba-44fb-aa6e-d1281c630741", 00:18:14.805 "is_configured": true, 00:18:14.805 "data_offset": 0, 00:18:14.805 "data_size": 65536 00:18:14.805 }, 00:18:14.805 { 00:18:14.805 "name": "BaseBdev2", 00:18:14.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.805 "is_configured": false, 00:18:14.805 "data_offset": 0, 00:18:14.805 "data_size": 0 00:18:14.805 }, 00:18:14.805 { 00:18:14.805 "name": "BaseBdev3", 00:18:14.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.805 "is_configured": false, 00:18:14.805 "data_offset": 0, 00:18:14.805 "data_size": 0 00:18:14.805 }, 00:18:14.805 { 00:18:14.805 "name": "BaseBdev4", 00:18:14.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.805 "is_configured": false, 00:18:14.805 "data_offset": 0, 00:18:14.805 "data_size": 0 00:18:14.805 } 00:18:14.805 ] 00:18:14.805 }' 00:18:14.805 16:33:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.805 16:33:51 -- common/autotest_common.sh@10 -- # set +x 00:18:15.371 16:33:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:15.629 [2024-07-11 16:33:52.301393] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:15.629 BaseBdev2 00:18:15.629 16:33:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:15.629 16:33:52 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:15.629 16:33:52 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:15.629 16:33:52 -- common/autotest_common.sh@889 -- # local i 00:18:15.629 16:33:52 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:15.629 16:33:52 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:15.629 16:33:52 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:15.887 16:33:52 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:16.145 [ 00:18:16.145 { 00:18:16.145 "name": "BaseBdev2", 00:18:16.145 "aliases": [ 00:18:16.145 "9897ab98-6475-4da1-9e99-06514b829096" 00:18:16.145 ], 00:18:16.145 "product_name": "Malloc disk", 00:18:16.145 "block_size": 512, 00:18:16.145 "num_blocks": 65536, 00:18:16.145 "uuid": "9897ab98-6475-4da1-9e99-06514b829096", 00:18:16.145 "assigned_rate_limits": { 00:18:16.145 "rw_ios_per_sec": 0, 00:18:16.145 "rw_mbytes_per_sec": 0, 00:18:16.145 "r_mbytes_per_sec": 0, 00:18:16.145 "w_mbytes_per_sec": 0 00:18:16.145 }, 00:18:16.145 "claimed": true, 00:18:16.145 "claim_type": "exclusive_write", 00:18:16.145 "zoned": false, 00:18:16.145 "supported_io_types": { 00:18:16.145 "read": true, 00:18:16.145 "write": true, 00:18:16.145 "unmap": true, 00:18:16.145 "write_zeroes": true, 00:18:16.145 "flush": true, 00:18:16.145 "reset": true, 00:18:16.145 "compare": false, 00:18:16.145 "compare_and_write": false, 00:18:16.145 "abort": true, 00:18:16.145 "nvme_admin": false, 00:18:16.145 "nvme_io": false 00:18:16.145 }, 00:18:16.145 "memory_domains": [ 
00:18:16.145 { 00:18:16.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:16.145 "dma_device_type": 2 00:18:16.145 } 00:18:16.145 ], 00:18:16.145 "driver_specific": {} 00:18:16.145 } 00:18:16.145 ] 00:18:16.145 16:33:52 -- common/autotest_common.sh@895 -- # return 0 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:16.145 "name": "Existed_Raid", 00:18:16.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.145 "strip_size_kb": 64, 00:18:16.145 "state": "configuring", 00:18:16.145 "raid_level": "raid0", 00:18:16.145 "superblock": false, 00:18:16.145 "num_base_bdevs": 4, 00:18:16.145 "num_base_bdevs_discovered": 2, 00:18:16.145 "num_base_bdevs_operational": 4, 00:18:16.145 "base_bdevs_list": [ 00:18:16.145 { 00:18:16.145 "name": "BaseBdev1", 00:18:16.145 "uuid": "7c819479-a3ba-44fb-aa6e-d1281c630741", 00:18:16.145 "is_configured": true, 00:18:16.145 "data_offset": 0, 00:18:16.145 "data_size": 65536 00:18:16.145 }, 00:18:16.145 { 00:18:16.145 "name": "BaseBdev2", 00:18:16.145 "uuid": "9897ab98-6475-4da1-9e99-06514b829096", 00:18:16.145 "is_configured": true, 00:18:16.145 "data_offset": 0, 00:18:16.145 "data_size": 65536 00:18:16.145 }, 00:18:16.145 { 00:18:16.145 "name": "BaseBdev3", 00:18:16.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.145 "is_configured": false, 00:18:16.145 "data_offset": 0, 00:18:16.145 "data_size": 0 00:18:16.145 }, 00:18:16.145 { 00:18:16.145 "name": "BaseBdev4", 00:18:16.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.145 "is_configured": false, 00:18:16.145 "data_offset": 0, 00:18:16.145 "data_size": 0 00:18:16.145 } 00:18:16.145 ] 00:18:16.145 }' 00:18:16.145 16:33:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:16.145 16:33:52 -- common/autotest_common.sh@10 -- # set +x 00:18:17.077 16:33:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:17.077 [2024-07-11 16:33:53.738083] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:17.077 BaseBdev3 00:18:17.077 16:33:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:17.077 16:33:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:17.077 16:33:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:17.077 
16:33:53 -- common/autotest_common.sh@889 -- # local i 00:18:17.077 16:33:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:17.077 16:33:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:17.077 16:33:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:17.335 16:33:53 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:17.593 [ 00:18:17.593 { 00:18:17.593 "name": "BaseBdev3", 00:18:17.593 "aliases": [ 00:18:17.593 "b8b58240-32a4-4117-a6ae-3a1e70bf0206" 00:18:17.593 ], 00:18:17.593 "product_name": "Malloc disk", 00:18:17.593 "block_size": 512, 00:18:17.593 "num_blocks": 65536, 00:18:17.593 "uuid": "b8b58240-32a4-4117-a6ae-3a1e70bf0206", 00:18:17.593 "assigned_rate_limits": { 00:18:17.593 "rw_ios_per_sec": 0, 00:18:17.593 "rw_mbytes_per_sec": 0, 00:18:17.593 "r_mbytes_per_sec": 0, 00:18:17.593 "w_mbytes_per_sec": 0 00:18:17.593 }, 00:18:17.593 "claimed": true, 00:18:17.593 "claim_type": "exclusive_write", 00:18:17.593 "zoned": false, 00:18:17.593 "supported_io_types": { 00:18:17.593 "read": true, 00:18:17.593 "write": true, 00:18:17.593 "unmap": true, 00:18:17.593 "write_zeroes": true, 00:18:17.593 "flush": true, 00:18:17.593 "reset": true, 00:18:17.593 "compare": false, 00:18:17.593 "compare_and_write": false, 00:18:17.593 "abort": true, 00:18:17.593 "nvme_admin": false, 00:18:17.593 "nvme_io": false 00:18:17.593 }, 00:18:17.593 "memory_domains": [ 00:18:17.593 { 00:18:17.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.593 "dma_device_type": 2 00:18:17.593 } 00:18:17.593 ], 00:18:17.593 "driver_specific": {} 00:18:17.593 } 00:18:17.593 ] 00:18:17.593 16:33:54 -- common/autotest_common.sh@895 -- # return 0 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.593 16:33:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.850 16:33:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:17.850 "name": "Existed_Raid", 00:18:17.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.851 "strip_size_kb": 64, 00:18:17.851 "state": "configuring", 00:18:17.851 "raid_level": "raid0", 00:18:17.851 "superblock": false, 00:18:17.851 "num_base_bdevs": 4, 00:18:17.851 "num_base_bdevs_discovered": 3, 00:18:17.851 "num_base_bdevs_operational": 4, 00:18:17.851 "base_bdevs_list": [ 00:18:17.851 { 00:18:17.851 "name": 
"BaseBdev1", 00:18:17.851 "uuid": "7c819479-a3ba-44fb-aa6e-d1281c630741", 00:18:17.851 "is_configured": true, 00:18:17.851 "data_offset": 0, 00:18:17.851 "data_size": 65536 00:18:17.851 }, 00:18:17.851 { 00:18:17.851 "name": "BaseBdev2", 00:18:17.851 "uuid": "9897ab98-6475-4da1-9e99-06514b829096", 00:18:17.851 "is_configured": true, 00:18:17.851 "data_offset": 0, 00:18:17.851 "data_size": 65536 00:18:17.851 }, 00:18:17.851 { 00:18:17.851 "name": "BaseBdev3", 00:18:17.851 "uuid": "b8b58240-32a4-4117-a6ae-3a1e70bf0206", 00:18:17.851 "is_configured": true, 00:18:17.851 "data_offset": 0, 00:18:17.851 "data_size": 65536 00:18:17.851 }, 00:18:17.851 { 00:18:17.851 "name": "BaseBdev4", 00:18:17.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.851 "is_configured": false, 00:18:17.851 "data_offset": 0, 00:18:17.851 "data_size": 0 00:18:17.851 } 00:18:17.851 ] 00:18:17.851 }' 00:18:17.851 16:33:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:17.851 16:33:54 -- common/autotest_common.sh@10 -- # set +x 00:18:18.416 16:33:55 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:18.674 [2024-07-11 16:33:55.289771] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:18.674 [2024-07-11 16:33:55.289816] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:18.674 [2024-07-11 16:33:55.289825] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:18.674 [2024-07-11 16:33:55.289943] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:18.674 [2024-07-11 16:33:55.290591] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:18.674 [2024-07-11 16:33:55.290614] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:18.674 [2024-07-11 16:33:55.291037] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.674 BaseBdev4 00:18:18.674 16:33:55 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:18.674 16:33:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:18.674 16:33:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:18.674 16:33:55 -- common/autotest_common.sh@889 -- # local i 00:18:18.674 16:33:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:18.674 16:33:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:18.674 16:33:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:18.932 16:33:55 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:19.190 [ 00:18:19.190 { 00:18:19.190 "name": "BaseBdev4", 00:18:19.190 "aliases": [ 00:18:19.190 "b07db881-5457-4fb9-8b93-e051ac741de7" 00:18:19.190 ], 00:18:19.190 "product_name": "Malloc disk", 00:18:19.190 "block_size": 512, 00:18:19.190 "num_blocks": 65536, 00:18:19.190 "uuid": "b07db881-5457-4fb9-8b93-e051ac741de7", 00:18:19.190 "assigned_rate_limits": { 00:18:19.190 "rw_ios_per_sec": 0, 00:18:19.190 "rw_mbytes_per_sec": 0, 00:18:19.190 "r_mbytes_per_sec": 0, 00:18:19.190 "w_mbytes_per_sec": 0 00:18:19.190 }, 00:18:19.190 "claimed": true, 00:18:19.190 "claim_type": "exclusive_write", 00:18:19.190 "zoned": false, 00:18:19.190 
"supported_io_types": { 00:18:19.190 "read": true, 00:18:19.190 "write": true, 00:18:19.190 "unmap": true, 00:18:19.190 "write_zeroes": true, 00:18:19.190 "flush": true, 00:18:19.190 "reset": true, 00:18:19.190 "compare": false, 00:18:19.190 "compare_and_write": false, 00:18:19.190 "abort": true, 00:18:19.190 "nvme_admin": false, 00:18:19.190 "nvme_io": false 00:18:19.190 }, 00:18:19.190 "memory_domains": [ 00:18:19.190 { 00:18:19.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.190 "dma_device_type": 2 00:18:19.190 } 00:18:19.190 ], 00:18:19.190 "driver_specific": {} 00:18:19.190 } 00:18:19.190 ] 00:18:19.190 16:33:55 -- common/autotest_common.sh@895 -- # return 0 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:19.190 "name": "Existed_Raid", 00:18:19.190 "uuid": "0f4e5653-3a77-49a6-9567-0079fb9d2e94", 00:18:19.190 "strip_size_kb": 64, 00:18:19.190 "state": "online", 00:18:19.190 "raid_level": "raid0", 00:18:19.190 "superblock": false, 00:18:19.190 "num_base_bdevs": 4, 00:18:19.190 "num_base_bdevs_discovered": 4, 00:18:19.190 "num_base_bdevs_operational": 4, 00:18:19.190 "base_bdevs_list": [ 00:18:19.190 { 00:18:19.190 "name": "BaseBdev1", 00:18:19.190 "uuid": "7c819479-a3ba-44fb-aa6e-d1281c630741", 00:18:19.190 "is_configured": true, 00:18:19.190 "data_offset": 0, 00:18:19.190 "data_size": 65536 00:18:19.190 }, 00:18:19.190 { 00:18:19.190 "name": "BaseBdev2", 00:18:19.190 "uuid": "9897ab98-6475-4da1-9e99-06514b829096", 00:18:19.190 "is_configured": true, 00:18:19.190 "data_offset": 0, 00:18:19.190 "data_size": 65536 00:18:19.190 }, 00:18:19.190 { 00:18:19.190 "name": "BaseBdev3", 00:18:19.190 "uuid": "b8b58240-32a4-4117-a6ae-3a1e70bf0206", 00:18:19.190 "is_configured": true, 00:18:19.190 "data_offset": 0, 00:18:19.190 "data_size": 65536 00:18:19.190 }, 00:18:19.190 { 00:18:19.190 "name": "BaseBdev4", 00:18:19.190 "uuid": "b07db881-5457-4fb9-8b93-e051ac741de7", 00:18:19.190 "is_configured": true, 00:18:19.190 "data_offset": 0, 00:18:19.190 "data_size": 65536 00:18:19.190 } 00:18:19.190 ] 00:18:19.190 }' 00:18:19.190 16:33:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:19.190 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:20.124 
[2024-07-11 16:33:56.834190] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:20.124 [2024-07-11 16:33:56.834221] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.124 [2024-07-11 16:33:56.834283] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.124 16:33:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.382 16:33:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.382 "name": "Existed_Raid", 00:18:20.382 "uuid": "0f4e5653-3a77-49a6-9567-0079fb9d2e94", 00:18:20.382 "strip_size_kb": 64, 00:18:20.382 "state": "offline", 00:18:20.382 "raid_level": "raid0", 00:18:20.382 "superblock": false, 00:18:20.382 "num_base_bdevs": 4, 00:18:20.382 "num_base_bdevs_discovered": 3, 00:18:20.382 "num_base_bdevs_operational": 3, 00:18:20.382 "base_bdevs_list": [ 00:18:20.382 { 00:18:20.382 "name": null, 00:18:20.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.382 "is_configured": false, 00:18:20.382 "data_offset": 0, 00:18:20.382 "data_size": 65536 00:18:20.382 }, 00:18:20.382 { 00:18:20.382 "name": "BaseBdev2", 00:18:20.382 "uuid": "9897ab98-6475-4da1-9e99-06514b829096", 00:18:20.382 "is_configured": true, 00:18:20.382 "data_offset": 0, 00:18:20.382 "data_size": 65536 00:18:20.382 }, 00:18:20.382 { 00:18:20.382 "name": "BaseBdev3", 00:18:20.382 "uuid": "b8b58240-32a4-4117-a6ae-3a1e70bf0206", 00:18:20.382 "is_configured": true, 00:18:20.382 "data_offset": 0, 00:18:20.382 "data_size": 65536 00:18:20.382 }, 00:18:20.382 { 00:18:20.382 "name": "BaseBdev4", 00:18:20.382 "uuid": "b07db881-5457-4fb9-8b93-e051ac741de7", 00:18:20.382 "is_configured": true, 00:18:20.382 "data_offset": 0, 00:18:20.382 "data_size": 65536 00:18:20.382 } 00:18:20.382 ] 00:18:20.382 }' 00:18:20.382 16:33:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.382 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:18:21.317 16:33:57 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:21.317 16:33:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:21.317 16:33:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.317 
16:33:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:21.317 16:33:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:21.317 16:33:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:21.317 16:33:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:21.574 [2024-07-11 16:33:58.327956] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:21.832 16:33:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:21.832 16:33:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:21.832 16:33:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.832 16:33:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:21.832 16:33:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:21.832 16:33:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:21.832 16:33:58 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:22.090 [2024-07-11 16:33:58.822545] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:22.090 16:33:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:22.090 16:33:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:22.090 16:33:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.090 16:33:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:22.348 16:33:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:22.348 16:33:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:22.348 16:33:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:22.607 [2024-07-11 16:33:59.289405] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:22.607 [2024-07-11 16:33:59.289519] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:22.607 16:33:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:22.607 16:33:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:22.607 16:33:59 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.607 16:33:59 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:22.865 16:33:59 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:22.865 16:33:59 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:22.865 16:33:59 -- bdev/bdev_raid.sh@287 -- # killprocess 121372 00:18:22.865 16:33:59 -- common/autotest_common.sh@926 -- # '[' -z 121372 ']' 00:18:22.865 16:33:59 -- common/autotest_common.sh@930 -- # kill -0 121372 00:18:22.865 16:33:59 -- common/autotest_common.sh@931 -- # uname 00:18:22.865 16:33:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:22.865 16:33:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121372 00:18:22.865 killing process with pid 121372 00:18:22.865 16:33:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:22.865 16:33:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:22.865 16:33:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121372' 00:18:22.865 16:33:59 -- common/autotest_common.sh@945 -- # kill 121372 
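[Annotation] The teardown loop above walks i from 1 to num_base_bdevs, deleting one base malloc bdev per pass and re-reading the array through bdev_raid_get_bdevs plus a jq filter before the app is killed. A standalone sketch of the state assertion that loop relies on, with the socket path taken from this run (the helper name check_raid_state and its two-argument interface are illustrative, not part of the SPDK test suite):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Assert that the named raid bdev reports the expected "state" field.
    check_raid_state() {
            local name=$1 expected=$2
            local state
            state=$($RPC bdev_raid_get_bdevs all |
                    jq -r ".[] | select(.name == \"$name\") | .state")
            [[ $state == "$expected" ]]
    }

    check_raid_state Existed_Raid offline || echo "Existed_Raid not offline" >&2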
00:18:22.865 16:33:59 -- common/autotest_common.sh@950 -- # wait 121372 00:18:22.865 [2024-07-11 16:33:59.587025] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:22.865 [2024-07-11 16:33:59.587186] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:23.799 ************************************ 00:18:23.799 END TEST raid_state_function_test 00:18:23.799 ************************************ 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:23.799 00:18:23.799 real 0m13.686s 00:18:23.799 user 0m24.801s 00:18:23.799 sys 0m1.368s 00:18:23.799 16:34:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:23.799 16:34:00 -- common/autotest_common.sh@10 -- # set +x 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:18:23.799 16:34:00 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:23.799 16:34:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:23.799 16:34:00 -- common/autotest_common.sh@10 -- # set +x 00:18:23.799 ************************************ 00:18:23.799 START TEST raid_state_function_test_sb 00:18:23.799 ************************************ 00:18:23.799 16:34:00 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@226 -- # raid_pid=121834 
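[Annotation] The _sb run starting here repeats the same raid0 state machine; the only functional difference is the -s flag on bdev_raid_create, which makes each base bdev carry an on-disk superblock. That is why the JSON dumps further down report "data_offset": 2048 and "data_size": 63488 where the non-superblock test above showed 0 and 65536. The two create calls side by side (socket path and bdev names as in this log):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Non-superblock array (raid_state_function_test): data_offset stays 0.
    $RPC bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # Superblock array (raid_state_function_test_sb): -s reserves metadata
    # at the start of each base bdev, shrinking the usable data region.
    $RPC bdev_raid_create -z 64 -s -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid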
00:18:23.799 Process raid pid: 121834 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121834' 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:23.799 16:34:00 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121834 /var/tmp/spdk-raid.sock 00:18:23.799 16:34:00 -- common/autotest_common.sh@819 -- # '[' -z 121834 ']' 00:18:23.799 16:34:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:23.799 16:34:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:23.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:23.799 16:34:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:23.799 16:34:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:23.799 16:34:00 -- common/autotest_common.sh@10 -- # set +x 00:18:23.799 [2024-07-11 16:34:00.606375] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:23.799 [2024-07-11 16:34:00.606565] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.057 [2024-07-11 16:34:00.773167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.315 [2024-07-11 16:34:00.964403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.604 [2024-07-11 16:34:01.129346] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:24.882 16:34:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:24.882 16:34:01 -- common/autotest_common.sh@852 -- # return 0 00:18:24.882 16:34:01 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:25.151 [2024-07-11 16:34:01.699184] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:25.151 [2024-07-11 16:34:01.699283] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:25.151 [2024-07-11 16:34:01.699296] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:25.151 [2024-07-11 16:34:01.699319] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:25.151 [2024-07-11 16:34:01.699326] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:25.151 [2024-07-11 16:34:01.699408] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:25.151 [2024-07-11 16:34:01.699417] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:25.151 [2024-07-11 16:34:01.699439] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:25.151 16:34:01 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:25.151 16:34:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:25.151 16:34:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:25.151 16:34:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:25.151 16:34:01 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:25.151 16:34:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:25.151 16:34:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:25.151 16:34:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:25.151 16:34:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:25.151 16:34:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:25.151 16:34:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.151 16:34:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.151 16:34:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:25.151 "name": "Existed_Raid", 00:18:25.151 "uuid": "18a1d21d-fc11-4171-84ad-dbdbff5c5ea5", 00:18:25.151 "strip_size_kb": 64, 00:18:25.151 "state": "configuring", 00:18:25.151 "raid_level": "raid0", 00:18:25.151 "superblock": true, 00:18:25.151 "num_base_bdevs": 4, 00:18:25.151 "num_base_bdevs_discovered": 0, 00:18:25.151 "num_base_bdevs_operational": 4, 00:18:25.151 "base_bdevs_list": [ 00:18:25.151 { 00:18:25.151 "name": "BaseBdev1", 00:18:25.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.151 "is_configured": false, 00:18:25.151 "data_offset": 0, 00:18:25.151 "data_size": 0 00:18:25.151 }, 00:18:25.151 { 00:18:25.151 "name": "BaseBdev2", 00:18:25.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.151 "is_configured": false, 00:18:25.151 "data_offset": 0, 00:18:25.151 "data_size": 0 00:18:25.151 }, 00:18:25.151 { 00:18:25.151 "name": "BaseBdev3", 00:18:25.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.151 "is_configured": false, 00:18:25.151 "data_offset": 0, 00:18:25.151 "data_size": 0 00:18:25.151 }, 00:18:25.151 { 00:18:25.151 "name": "BaseBdev4", 00:18:25.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.151 "is_configured": false, 00:18:25.151 "data_offset": 0, 00:18:25.151 "data_size": 0 00:18:25.151 } 00:18:25.151 ] 00:18:25.151 }' 00:18:25.151 16:34:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.151 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:18:25.719 16:34:02 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:25.978 [2024-07-11 16:34:02.687205] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:25.978 [2024-07-11 16:34:02.687236] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:25.978 16:34:02 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:26.237 [2024-07-11 16:34:02.863297] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:26.237 [2024-07-11 16:34:02.863375] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:26.237 [2024-07-11 16:34:02.863401] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.237 [2024-07-11 16:34:02.863429] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.237 [2024-07-11 16:34:02.863437] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:26.237 [2024-07-11 16:34:02.863483] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:26.237 [2024-07-11 16:34:02.863505] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:26.237 [2024-07-11 16:34:02.863544] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:26.237 16:34:02 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:26.494 [2024-07-11 16:34:03.120594] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.494 BaseBdev1 00:18:26.494 16:34:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:26.494 16:34:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:26.494 16:34:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:26.494 16:34:03 -- common/autotest_common.sh@889 -- # local i 00:18:26.494 16:34:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:26.494 16:34:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:26.494 16:34:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:26.752 16:34:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:26.752 [ 00:18:26.752 { 00:18:26.752 "name": "BaseBdev1", 00:18:26.752 "aliases": [ 00:18:26.752 "b1dbb1dc-a136-4af2-8821-337a42b56ced" 00:18:26.752 ], 00:18:26.752 "product_name": "Malloc disk", 00:18:26.752 "block_size": 512, 00:18:26.752 "num_blocks": 65536, 00:18:26.752 "uuid": "b1dbb1dc-a136-4af2-8821-337a42b56ced", 00:18:26.752 "assigned_rate_limits": { 00:18:26.752 "rw_ios_per_sec": 0, 00:18:26.752 "rw_mbytes_per_sec": 0, 00:18:26.752 "r_mbytes_per_sec": 0, 00:18:26.752 "w_mbytes_per_sec": 0 00:18:26.752 }, 00:18:26.752 "claimed": true, 00:18:26.752 "claim_type": "exclusive_write", 00:18:26.752 "zoned": false, 00:18:26.752 "supported_io_types": { 00:18:26.752 "read": true, 00:18:26.752 "write": true, 00:18:26.752 "unmap": true, 00:18:26.752 "write_zeroes": true, 00:18:26.752 "flush": true, 00:18:26.752 "reset": true, 00:18:26.752 "compare": false, 00:18:26.752 "compare_and_write": false, 00:18:26.752 "abort": true, 00:18:26.752 "nvme_admin": false, 00:18:26.752 "nvme_io": false 00:18:26.752 }, 00:18:26.752 "memory_domains": [ 00:18:26.752 { 00:18:26.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.752 "dma_device_type": 2 00:18:26.752 } 00:18:26.752 ], 00:18:26.752 "driver_specific": {} 00:18:26.752 } 00:18:26.752 ] 00:18:26.752 16:34:03 -- common/autotest_common.sh@895 -- # return 0 00:18:26.752 16:34:03 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:26.752 16:34:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:26.752 16:34:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:26.752 16:34:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:26.752 16:34:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:26.752 16:34:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:26.752 16:34:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.752 16:34:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.752 16:34:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.752 16:34:03 -- bdev/bdev_raid.sh@125 -- # local tmp 
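[Annotation] Every waitforbdev call traced above expands to the same two RPCs: flush pending examine callbacks, then fetch the bdev with a bounded wait. Reduced to a function (the name wait_for_bdev is ours; the real helper in autotest_common.sh appears to wrap these calls in retry bookkeeping, as the "local i" / "(( i == 0 ))" lines in the trace suggest):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Block until a bdev with the given name is registered.
    wait_for_bdev() {
            local name=$1 timeout_ms=${2:-2000}
            # Let any in-flight examine callbacks finish first
            $RPC bdev_wait_for_examine
            # -t makes the RPC itself wait up to timeout_ms for the bdev
            $RPC bdev_get_bdevs -b "$name" -t "$timeout_ms" > /dev/null
    }

    wait_for_bdev BaseBdev1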
00:18:26.752 16:34:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.752 16:34:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.010 16:34:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.010 "name": "Existed_Raid", 00:18:27.010 "uuid": "8e7c1ed4-39d9-4a48-ac0b-150dc86c05f7", 00:18:27.010 "strip_size_kb": 64, 00:18:27.010 "state": "configuring", 00:18:27.010 "raid_level": "raid0", 00:18:27.010 "superblock": true, 00:18:27.010 "num_base_bdevs": 4, 00:18:27.010 "num_base_bdevs_discovered": 1, 00:18:27.010 "num_base_bdevs_operational": 4, 00:18:27.010 "base_bdevs_list": [ 00:18:27.010 { 00:18:27.010 "name": "BaseBdev1", 00:18:27.010 "uuid": "b1dbb1dc-a136-4af2-8821-337a42b56ced", 00:18:27.010 "is_configured": true, 00:18:27.010 "data_offset": 2048, 00:18:27.010 "data_size": 63488 00:18:27.010 }, 00:18:27.010 { 00:18:27.010 "name": "BaseBdev2", 00:18:27.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.010 "is_configured": false, 00:18:27.010 "data_offset": 0, 00:18:27.010 "data_size": 0 00:18:27.010 }, 00:18:27.010 { 00:18:27.010 "name": "BaseBdev3", 00:18:27.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.010 "is_configured": false, 00:18:27.010 "data_offset": 0, 00:18:27.010 "data_size": 0 00:18:27.010 }, 00:18:27.010 { 00:18:27.010 "name": "BaseBdev4", 00:18:27.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.010 "is_configured": false, 00:18:27.010 "data_offset": 0, 00:18:27.010 "data_size": 0 00:18:27.010 } 00:18:27.010 ] 00:18:27.010 }' 00:18:27.010 16:34:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.010 16:34:03 -- common/autotest_common.sh@10 -- # set +x 00:18:27.576 16:34:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:27.834 [2024-07-11 16:34:04.508848] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:27.834 [2024-07-11 16:34:04.508889] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:27.834 16:34:04 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:27.834 16:34:04 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:28.119 16:34:04 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:28.385 BaseBdev1 00:18:28.385 16:34:04 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:28.385 16:34:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:28.385 16:34:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:28.385 16:34:04 -- common/autotest_common.sh@889 -- # local i 00:18:28.385 16:34:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:28.385 16:34:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:28.385 16:34:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.648 16:34:05 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:28.648 [ 00:18:28.648 { 00:18:28.648 "name": "BaseBdev1", 00:18:28.648 "aliases": [ 00:18:28.648 "128c0922-d618-4664-a55a-a4bfba3b4f0c" 00:18:28.648 ], 00:18:28.648 
"product_name": "Malloc disk", 00:18:28.648 "block_size": 512, 00:18:28.648 "num_blocks": 65536, 00:18:28.648 "uuid": "128c0922-d618-4664-a55a-a4bfba3b4f0c", 00:18:28.648 "assigned_rate_limits": { 00:18:28.648 "rw_ios_per_sec": 0, 00:18:28.648 "rw_mbytes_per_sec": 0, 00:18:28.648 "r_mbytes_per_sec": 0, 00:18:28.648 "w_mbytes_per_sec": 0 00:18:28.648 }, 00:18:28.648 "claimed": false, 00:18:28.648 "zoned": false, 00:18:28.648 "supported_io_types": { 00:18:28.648 "read": true, 00:18:28.649 "write": true, 00:18:28.649 "unmap": true, 00:18:28.649 "write_zeroes": true, 00:18:28.649 "flush": true, 00:18:28.649 "reset": true, 00:18:28.649 "compare": false, 00:18:28.649 "compare_and_write": false, 00:18:28.649 "abort": true, 00:18:28.649 "nvme_admin": false, 00:18:28.649 "nvme_io": false 00:18:28.649 }, 00:18:28.649 "memory_domains": [ 00:18:28.649 { 00:18:28.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.649 "dma_device_type": 2 00:18:28.649 } 00:18:28.649 ], 00:18:28.649 "driver_specific": {} 00:18:28.649 } 00:18:28.649 ] 00:18:28.649 16:34:05 -- common/autotest_common.sh@895 -- # return 0 00:18:28.649 16:34:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:28.906 [2024-07-11 16:34:05.563213] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.906 [2024-07-11 16:34:05.564881] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:28.906 [2024-07-11 16:34:05.565003] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:28.906 [2024-07-11 16:34:05.565018] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:28.906 [2024-07-11 16:34:05.565045] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:28.906 [2024-07-11 16:34:05.565055] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:28.906 [2024-07-11 16:34:05.565072] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.906 16:34:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.164 16:34:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.164 "name": "Existed_Raid", 00:18:29.164 
"uuid": "e56d0db1-d484-4456-895f-e01846cf6502", 00:18:29.164 "strip_size_kb": 64, 00:18:29.164 "state": "configuring", 00:18:29.164 "raid_level": "raid0", 00:18:29.164 "superblock": true, 00:18:29.164 "num_base_bdevs": 4, 00:18:29.164 "num_base_bdevs_discovered": 1, 00:18:29.164 "num_base_bdevs_operational": 4, 00:18:29.164 "base_bdevs_list": [ 00:18:29.164 { 00:18:29.164 "name": "BaseBdev1", 00:18:29.164 "uuid": "128c0922-d618-4664-a55a-a4bfba3b4f0c", 00:18:29.164 "is_configured": true, 00:18:29.164 "data_offset": 2048, 00:18:29.164 "data_size": 63488 00:18:29.164 }, 00:18:29.164 { 00:18:29.164 "name": "BaseBdev2", 00:18:29.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.164 "is_configured": false, 00:18:29.164 "data_offset": 0, 00:18:29.164 "data_size": 0 00:18:29.164 }, 00:18:29.164 { 00:18:29.164 "name": "BaseBdev3", 00:18:29.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.164 "is_configured": false, 00:18:29.164 "data_offset": 0, 00:18:29.164 "data_size": 0 00:18:29.164 }, 00:18:29.164 { 00:18:29.164 "name": "BaseBdev4", 00:18:29.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.164 "is_configured": false, 00:18:29.164 "data_offset": 0, 00:18:29.164 "data_size": 0 00:18:29.164 } 00:18:29.164 ] 00:18:29.164 }' 00:18:29.164 16:34:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.164 16:34:05 -- common/autotest_common.sh@10 -- # set +x 00:18:29.733 16:34:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:29.992 [2024-07-11 16:34:06.657614] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:29.992 BaseBdev2 00:18:29.992 16:34:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:29.992 16:34:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:29.992 16:34:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:29.992 16:34:06 -- common/autotest_common.sh@889 -- # local i 00:18:29.992 16:34:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:29.992 16:34:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:29.992 16:34:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:30.251 16:34:06 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:30.251 [ 00:18:30.251 { 00:18:30.251 "name": "BaseBdev2", 00:18:30.251 "aliases": [ 00:18:30.251 "3cd012ac-ef4f-4ddc-9be0-b271e7d06aa5" 00:18:30.251 ], 00:18:30.251 "product_name": "Malloc disk", 00:18:30.251 "block_size": 512, 00:18:30.251 "num_blocks": 65536, 00:18:30.251 "uuid": "3cd012ac-ef4f-4ddc-9be0-b271e7d06aa5", 00:18:30.251 "assigned_rate_limits": { 00:18:30.251 "rw_ios_per_sec": 0, 00:18:30.251 "rw_mbytes_per_sec": 0, 00:18:30.251 "r_mbytes_per_sec": 0, 00:18:30.251 "w_mbytes_per_sec": 0 00:18:30.251 }, 00:18:30.251 "claimed": true, 00:18:30.251 "claim_type": "exclusive_write", 00:18:30.251 "zoned": false, 00:18:30.251 "supported_io_types": { 00:18:30.251 "read": true, 00:18:30.251 "write": true, 00:18:30.251 "unmap": true, 00:18:30.251 "write_zeroes": true, 00:18:30.251 "flush": true, 00:18:30.251 "reset": true, 00:18:30.251 "compare": false, 00:18:30.251 "compare_and_write": false, 00:18:30.251 "abort": true, 00:18:30.251 "nvme_admin": false, 00:18:30.251 "nvme_io": false 00:18:30.251 }, 00:18:30.251 "memory_domains": [ 
00:18:30.251 { 00:18:30.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.251 "dma_device_type": 2 00:18:30.251 } 00:18:30.251 ], 00:18:30.251 "driver_specific": {} 00:18:30.251 } 00:18:30.251 ] 00:18:30.251 16:34:07 -- common/autotest_common.sh@895 -- # return 0 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.251 16:34:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.510 16:34:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.510 "name": "Existed_Raid", 00:18:30.510 "uuid": "e56d0db1-d484-4456-895f-e01846cf6502", 00:18:30.510 "strip_size_kb": 64, 00:18:30.510 "state": "configuring", 00:18:30.510 "raid_level": "raid0", 00:18:30.510 "superblock": true, 00:18:30.510 "num_base_bdevs": 4, 00:18:30.510 "num_base_bdevs_discovered": 2, 00:18:30.510 "num_base_bdevs_operational": 4, 00:18:30.510 "base_bdevs_list": [ 00:18:30.510 { 00:18:30.510 "name": "BaseBdev1", 00:18:30.510 "uuid": "128c0922-d618-4664-a55a-a4bfba3b4f0c", 00:18:30.510 "is_configured": true, 00:18:30.510 "data_offset": 2048, 00:18:30.510 "data_size": 63488 00:18:30.510 }, 00:18:30.510 { 00:18:30.510 "name": "BaseBdev2", 00:18:30.510 "uuid": "3cd012ac-ef4f-4ddc-9be0-b271e7d06aa5", 00:18:30.510 "is_configured": true, 00:18:30.510 "data_offset": 2048, 00:18:30.510 "data_size": 63488 00:18:30.510 }, 00:18:30.510 { 00:18:30.510 "name": "BaseBdev3", 00:18:30.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.510 "is_configured": false, 00:18:30.510 "data_offset": 0, 00:18:30.510 "data_size": 0 00:18:30.510 }, 00:18:30.510 { 00:18:30.510 "name": "BaseBdev4", 00:18:30.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.510 "is_configured": false, 00:18:30.510 "data_offset": 0, 00:18:30.510 "data_size": 0 00:18:30.510 } 00:18:30.510 ] 00:18:30.510 }' 00:18:30.510 16:34:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.510 16:34:07 -- common/autotest_common.sh@10 -- # set +x 00:18:31.078 16:34:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:31.337 [2024-07-11 16:34:08.076919] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.337 BaseBdev3 00:18:31.337 16:34:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:31.337 16:34:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:31.337 16:34:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:31.337 
16:34:08 -- common/autotest_common.sh@889 -- # local i 00:18:31.337 16:34:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:31.337 16:34:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:31.337 16:34:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:31.596 16:34:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:31.855 [ 00:18:31.855 { 00:18:31.855 "name": "BaseBdev3", 00:18:31.855 "aliases": [ 00:18:31.855 "a3bf561f-b104-4682-ad4d-442bfc77fc1f" 00:18:31.855 ], 00:18:31.855 "product_name": "Malloc disk", 00:18:31.855 "block_size": 512, 00:18:31.855 "num_blocks": 65536, 00:18:31.855 "uuid": "a3bf561f-b104-4682-ad4d-442bfc77fc1f", 00:18:31.855 "assigned_rate_limits": { 00:18:31.855 "rw_ios_per_sec": 0, 00:18:31.855 "rw_mbytes_per_sec": 0, 00:18:31.855 "r_mbytes_per_sec": 0, 00:18:31.855 "w_mbytes_per_sec": 0 00:18:31.855 }, 00:18:31.855 "claimed": true, 00:18:31.855 "claim_type": "exclusive_write", 00:18:31.855 "zoned": false, 00:18:31.855 "supported_io_types": { 00:18:31.855 "read": true, 00:18:31.855 "write": true, 00:18:31.855 "unmap": true, 00:18:31.855 "write_zeroes": true, 00:18:31.855 "flush": true, 00:18:31.855 "reset": true, 00:18:31.855 "compare": false, 00:18:31.855 "compare_and_write": false, 00:18:31.855 "abort": true, 00:18:31.855 "nvme_admin": false, 00:18:31.855 "nvme_io": false 00:18:31.855 }, 00:18:31.855 "memory_domains": [ 00:18:31.855 { 00:18:31.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.855 "dma_device_type": 2 00:18:31.855 } 00:18:31.855 ], 00:18:31.855 "driver_specific": {} 00:18:31.855 } 00:18:31.855 ] 00:18:31.855 16:34:08 -- common/autotest_common.sh@895 -- # return 0 00:18:31.855 16:34:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:31.855 16:34:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:31.855 16:34:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:18:31.855 16:34:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:31.855 16:34:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:31.856 16:34:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:31.856 16:34:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:31.856 16:34:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:31.856 16:34:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.856 16:34:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.856 16:34:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.856 16:34:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:31.856 16:34:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.856 16:34:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.856 16:34:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:31.856 "name": "Existed_Raid", 00:18:31.856 "uuid": "e56d0db1-d484-4456-895f-e01846cf6502", 00:18:31.856 "strip_size_kb": 64, 00:18:31.856 "state": "configuring", 00:18:31.856 "raid_level": "raid0", 00:18:31.856 "superblock": true, 00:18:31.856 "num_base_bdevs": 4, 00:18:31.856 "num_base_bdevs_discovered": 3, 00:18:31.856 "num_base_bdevs_operational": 4, 00:18:31.856 "base_bdevs_list": [ 00:18:31.856 { 00:18:31.856 "name": 
"BaseBdev1", 00:18:31.856 "uuid": "128c0922-d618-4664-a55a-a4bfba3b4f0c", 00:18:31.856 "is_configured": true, 00:18:31.856 "data_offset": 2048, 00:18:31.856 "data_size": 63488 00:18:31.856 }, 00:18:31.856 { 00:18:31.856 "name": "BaseBdev2", 00:18:31.856 "uuid": "3cd012ac-ef4f-4ddc-9be0-b271e7d06aa5", 00:18:31.856 "is_configured": true, 00:18:31.856 "data_offset": 2048, 00:18:31.856 "data_size": 63488 00:18:31.856 }, 00:18:31.856 { 00:18:31.856 "name": "BaseBdev3", 00:18:31.856 "uuid": "a3bf561f-b104-4682-ad4d-442bfc77fc1f", 00:18:31.856 "is_configured": true, 00:18:31.856 "data_offset": 2048, 00:18:31.856 "data_size": 63488 00:18:31.856 }, 00:18:31.856 { 00:18:31.856 "name": "BaseBdev4", 00:18:31.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.856 "is_configured": false, 00:18:31.856 "data_offset": 0, 00:18:31.856 "data_size": 0 00:18:31.856 } 00:18:31.856 ] 00:18:31.856 }' 00:18:31.856 16:34:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:31.856 16:34:08 -- common/autotest_common.sh@10 -- # set +x 00:18:32.792 16:34:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:32.792 [2024-07-11 16:34:09.480552] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:32.792 [2024-07-11 16:34:09.480787] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:18:32.792 [2024-07-11 16:34:09.480800] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:32.792 [2024-07-11 16:34:09.480976] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:32.792 BaseBdev4 00:18:32.792 [2024-07-11 16:34:09.481331] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:18:32.792 [2024-07-11 16:34:09.481370] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:18:32.792 [2024-07-11 16:34:09.481526] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.792 16:34:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:32.792 16:34:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:32.792 16:34:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:32.792 16:34:09 -- common/autotest_common.sh@889 -- # local i 00:18:32.792 16:34:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:32.792 16:34:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:32.792 16:34:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:33.051 16:34:09 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:33.309 [ 00:18:33.309 { 00:18:33.309 "name": "BaseBdev4", 00:18:33.309 "aliases": [ 00:18:33.309 "0c6681e0-d982-4bad-8a0c-9aca3ce9edc0" 00:18:33.309 ], 00:18:33.309 "product_name": "Malloc disk", 00:18:33.309 "block_size": 512, 00:18:33.309 "num_blocks": 65536, 00:18:33.309 "uuid": "0c6681e0-d982-4bad-8a0c-9aca3ce9edc0", 00:18:33.309 "assigned_rate_limits": { 00:18:33.309 "rw_ios_per_sec": 0, 00:18:33.309 "rw_mbytes_per_sec": 0, 00:18:33.309 "r_mbytes_per_sec": 0, 00:18:33.309 "w_mbytes_per_sec": 0 00:18:33.309 }, 00:18:33.309 "claimed": true, 00:18:33.309 "claim_type": "exclusive_write", 00:18:33.309 "zoned": false, 00:18:33.309 
"supported_io_types": { 00:18:33.309 "read": true, 00:18:33.309 "write": true, 00:18:33.309 "unmap": true, 00:18:33.309 "write_zeroes": true, 00:18:33.309 "flush": true, 00:18:33.309 "reset": true, 00:18:33.309 "compare": false, 00:18:33.309 "compare_and_write": false, 00:18:33.309 "abort": true, 00:18:33.309 "nvme_admin": false, 00:18:33.309 "nvme_io": false 00:18:33.309 }, 00:18:33.309 "memory_domains": [ 00:18:33.309 { 00:18:33.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.309 "dma_device_type": 2 00:18:33.309 } 00:18:33.309 ], 00:18:33.309 "driver_specific": {} 00:18:33.309 } 00:18:33.309 ] 00:18:33.309 16:34:09 -- common/autotest_common.sh@895 -- # return 0 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.309 16:34:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.309 16:34:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:33.310 "name": "Existed_Raid", 00:18:33.310 "uuid": "e56d0db1-d484-4456-895f-e01846cf6502", 00:18:33.310 "strip_size_kb": 64, 00:18:33.310 "state": "online", 00:18:33.310 "raid_level": "raid0", 00:18:33.310 "superblock": true, 00:18:33.310 "num_base_bdevs": 4, 00:18:33.310 "num_base_bdevs_discovered": 4, 00:18:33.310 "num_base_bdevs_operational": 4, 00:18:33.310 "base_bdevs_list": [ 00:18:33.310 { 00:18:33.310 "name": "BaseBdev1", 00:18:33.310 "uuid": "128c0922-d618-4664-a55a-a4bfba3b4f0c", 00:18:33.310 "is_configured": true, 00:18:33.310 "data_offset": 2048, 00:18:33.310 "data_size": 63488 00:18:33.310 }, 00:18:33.310 { 00:18:33.310 "name": "BaseBdev2", 00:18:33.310 "uuid": "3cd012ac-ef4f-4ddc-9be0-b271e7d06aa5", 00:18:33.310 "is_configured": true, 00:18:33.310 "data_offset": 2048, 00:18:33.310 "data_size": 63488 00:18:33.310 }, 00:18:33.310 { 00:18:33.310 "name": "BaseBdev3", 00:18:33.310 "uuid": "a3bf561f-b104-4682-ad4d-442bfc77fc1f", 00:18:33.310 "is_configured": true, 00:18:33.310 "data_offset": 2048, 00:18:33.310 "data_size": 63488 00:18:33.310 }, 00:18:33.310 { 00:18:33.310 "name": "BaseBdev4", 00:18:33.310 "uuid": "0c6681e0-d982-4bad-8a0c-9aca3ce9edc0", 00:18:33.310 "is_configured": true, 00:18:33.310 "data_offset": 2048, 00:18:33.310 "data_size": 63488 00:18:33.310 } 00:18:33.310 ] 00:18:33.310 }' 00:18:33.310 16:34:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:33.310 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:18:33.877 16:34:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:18:34.137 [2024-07-11 16:34:10.841562] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:34.137 [2024-07-11 16:34:10.841597] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.137 [2024-07-11 16:34:10.841680] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.137 16:34:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.395 16:34:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:34.395 "name": "Existed_Raid", 00:18:34.395 "uuid": "e56d0db1-d484-4456-895f-e01846cf6502", 00:18:34.395 "strip_size_kb": 64, 00:18:34.395 "state": "offline", 00:18:34.395 "raid_level": "raid0", 00:18:34.395 "superblock": true, 00:18:34.395 "num_base_bdevs": 4, 00:18:34.395 "num_base_bdevs_discovered": 3, 00:18:34.395 "num_base_bdevs_operational": 3, 00:18:34.395 "base_bdevs_list": [ 00:18:34.395 { 00:18:34.395 "name": null, 00:18:34.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.395 "is_configured": false, 00:18:34.395 "data_offset": 2048, 00:18:34.395 "data_size": 63488 00:18:34.395 }, 00:18:34.395 { 00:18:34.395 "name": "BaseBdev2", 00:18:34.395 "uuid": "3cd012ac-ef4f-4ddc-9be0-b271e7d06aa5", 00:18:34.395 "is_configured": true, 00:18:34.396 "data_offset": 2048, 00:18:34.396 "data_size": 63488 00:18:34.396 }, 00:18:34.396 { 00:18:34.396 "name": "BaseBdev3", 00:18:34.396 "uuid": "a3bf561f-b104-4682-ad4d-442bfc77fc1f", 00:18:34.396 "is_configured": true, 00:18:34.396 "data_offset": 2048, 00:18:34.396 "data_size": 63488 00:18:34.396 }, 00:18:34.396 { 00:18:34.396 "name": "BaseBdev4", 00:18:34.396 "uuid": "0c6681e0-d982-4bad-8a0c-9aca3ce9edc0", 00:18:34.396 "is_configured": true, 00:18:34.396 "data_offset": 2048, 00:18:34.396 "data_size": 63488 00:18:34.396 } 00:18:34.396 ] 00:18:34.396 }' 00:18:34.396 16:34:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:34.396 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:18:35.330 16:34:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:35.331 16:34:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:35.331 16:34:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:35.331 16:34:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:35.331 16:34:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:35.331 16:34:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:35.331 16:34:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:35.590 [2024-07-11 16:34:12.284887] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:35.590 16:34:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:35.590 16:34:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:35.590 16:34:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.590 16:34:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:35.847 16:34:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:35.847 16:34:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:35.847 16:34:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:36.106 [2024-07-11 16:34:12.775908] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:36.106 16:34:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:36.106 16:34:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:36.106 16:34:12 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.106 16:34:12 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:36.364 16:34:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:36.364 16:34:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:36.364 16:34:13 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:36.622 [2024-07-11 16:34:13.251167] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:36.623 [2024-07-11 16:34:13.251229] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:18:36.623 16:34:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:36.623 16:34:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:36.623 16:34:13 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.623 16:34:13 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:36.881 16:34:13 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:36.881 16:34:13 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:36.881 16:34:13 -- bdev/bdev_raid.sh@287 -- # killprocess 121834 00:18:36.881 16:34:13 -- common/autotest_common.sh@926 -- # '[' -z 121834 ']' 00:18:36.881 16:34:13 -- common/autotest_common.sh@930 -- # kill -0 121834 00:18:36.881 16:34:13 -- common/autotest_common.sh@931 -- # uname 00:18:36.881 16:34:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:36.881 16:34:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 121834 00:18:36.881 killing process with pid 121834 00:18:36.881 16:34:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:36.881 16:34:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:36.881 16:34:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 121834' 00:18:36.881 16:34:13 -- 
common/autotest_common.sh@945 -- # kill 121834 00:18:36.881 16:34:13 -- common/autotest_common.sh@950 -- # wait 121834 00:18:36.881 [2024-07-11 16:34:13.593686] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:36.881 [2024-07-11 16:34:13.593862] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.837 ************************************ 00:18:37.837 END TEST raid_state_function_test_sb 00:18:37.837 ************************************ 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:37.837 00:18:37.837 real 0m13.950s 00:18:37.837 user 0m25.178s 00:18:37.837 sys 0m1.481s 00:18:37.837 16:34:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:37.837 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:18:37.837 16:34:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:37.837 16:34:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:37.837 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:18:37.837 ************************************ 00:18:37.837 START TEST raid_superblock_test 00:18:37.837 ************************************ 00:18:37.837 16:34:14 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:37.837 16:34:14 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:37.838 16:34:14 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:18:37.838 16:34:14 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:37.838 16:34:14 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:37.838 16:34:14 -- bdev/bdev_raid.sh@357 -- # raid_pid=122285 00:18:37.838 16:34:14 -- bdev/bdev_raid.sh@358 -- # waitforlisten 122285 /var/tmp/spdk-raid.sock 00:18:37.838 16:34:14 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:37.838 16:34:14 -- common/autotest_common.sh@819 -- # '[' -z 122285 ']' 00:18:37.838 16:34:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:37.838 16:34:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:37.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:37.838 16:34:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
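Each raid test starts its own bdev_svc app on a private RPC socket and spins in waitforlisten until that socket answers. A minimal sketch of the launch sequence, using only the binary path and flags visible in the trace above; the polling probe is an assumption, since waitforlisten's internals are not shown in this excerpt:

    # launch the SPDK bdev_svc test app with bdev_raid debug logging enabled
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # assumed readiness probe: retry a cheap RPC until the socket accepts connections
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done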
00:18:37.838 16:34:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:37.838 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:18:37.838 [2024-07-11 16:34:14.589611] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:37.838 [2024-07-11 16:34:14.589751] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122285 ] 00:18:38.107 [2024-07-11 16:34:14.742567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.372 [2024-07-11 16:34:14.945904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.372 [2024-07-11 16:34:15.107431] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:38.939 16:34:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:38.939 16:34:15 -- common/autotest_common.sh@852 -- # return 0 00:18:38.939 16:34:15 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:38.939 16:34:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:38.939 16:34:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:38.939 16:34:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:38.939 16:34:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:38.939 16:34:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:38.939 16:34:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:38.939 16:34:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:38.939 16:34:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:38.939 malloc1 00:18:38.939 16:34:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:39.198 [2024-07-11 16:34:15.923360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:39.198 [2024-07-11 16:34:15.923433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.198 [2024-07-11 16:34:15.923462] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:39.198 [2024-07-11 16:34:15.923509] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.198 [2024-07-11 16:34:15.925402] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.198 [2024-07-11 16:34:15.925443] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:39.198 pt1 00:18:39.198 16:34:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:39.198 16:34:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:39.198 16:34:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:39.198 16:34:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:39.198 16:34:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:39.198 16:34:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:39.198 16:34:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:39.198 16:34:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:39.198 16:34:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:39.456 malloc2 00:18:39.456 16:34:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:39.714 [2024-07-11 16:34:16.338594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:39.714 [2024-07-11 16:34:16.338657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.714 [2024-07-11 16:34:16.338694] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:39.714 [2024-07-11 16:34:16.338740] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.714 [2024-07-11 16:34:16.340579] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.714 [2024-07-11 16:34:16.340620] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:39.714 pt2 00:18:39.714 16:34:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:39.714 16:34:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:39.714 16:34:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:39.714 16:34:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:39.714 16:34:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:39.714 16:34:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:39.715 16:34:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:39.715 16:34:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:39.715 16:34:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:39.973 malloc3 00:18:39.973 16:34:16 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:39.973 [2024-07-11 16:34:16.747216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:39.973 [2024-07-11 16:34:16.747280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:39.973 [2024-07-11 16:34:16.747315] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:39.973 [2024-07-11 16:34:16.747354] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.973 [2024-07-11 16:34:16.749288] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.973 [2024-07-11 16:34:16.749378] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:39.973 pt3 00:18:39.973 16:34:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:39.973 16:34:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:39.973 16:34:16 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:39.973 16:34:16 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:39.973 16:34:16 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:39.973 16:34:16 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:39.973 16:34:16 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:39.973 16:34:16 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:39.973 16:34:16 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:40.231 malloc4 00:18:40.231 16:34:17 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:40.489 [2024-07-11 16:34:17.204436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:40.490 [2024-07-11 16:34:17.204507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.490 [2024-07-11 16:34:17.204545] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:40.490 [2024-07-11 16:34:17.204583] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.490 [2024-07-11 16:34:17.206488] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.490 [2024-07-11 16:34:17.206534] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:40.490 pt4 00:18:40.490 16:34:17 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:40.490 16:34:17 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:40.490 16:34:17 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:40.748 [2024-07-11 16:34:17.380502] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.748 [2024-07-11 16:34:17.382084] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.748 [2024-07-11 16:34:17.382154] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:40.748 [2024-07-11 16:34:17.382225] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:40.748 [2024-07-11 16:34:17.382492] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:18:40.748 [2024-07-11 16:34:17.382518] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:40.748 [2024-07-11 16:34:17.382664] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:40.748 [2024-07-11 16:34:17.383044] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:18:40.748 [2024-07-11 16:34:17.383068] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:18:40.748 [2024-07-11 16:34:17.383223] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.748 16:34:17 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:40.748 16:34:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:40.748 16:34:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:40.748 16:34:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:40.748 16:34:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:40.748 16:34:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:40.748 16:34:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.748 16:34:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.748 16:34:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:40.748 16:34:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.748 16:34:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:40.748 16:34:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.006 16:34:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:41.006 "name": "raid_bdev1", 00:18:41.006 "uuid": "19b8af7c-1abb-4c93-9adc-9435611cddce", 00:18:41.006 "strip_size_kb": 64, 00:18:41.006 "state": "online", 00:18:41.006 "raid_level": "raid0", 00:18:41.006 "superblock": true, 00:18:41.006 "num_base_bdevs": 4, 00:18:41.006 "num_base_bdevs_discovered": 4, 00:18:41.006 "num_base_bdevs_operational": 4, 00:18:41.006 "base_bdevs_list": [ 00:18:41.006 { 00:18:41.006 "name": "pt1", 00:18:41.006 "uuid": "621520e1-2692-597c-9d70-aaebd99ec8e8", 00:18:41.006 "is_configured": true, 00:18:41.006 "data_offset": 2048, 00:18:41.006 "data_size": 63488 00:18:41.006 }, 00:18:41.006 { 00:18:41.006 "name": "pt2", 00:18:41.006 "uuid": "264d49ff-6e73-55e8-9f33-41cf741c2a08", 00:18:41.006 "is_configured": true, 00:18:41.006 "data_offset": 2048, 00:18:41.006 "data_size": 63488 00:18:41.006 }, 00:18:41.006 { 00:18:41.006 "name": "pt3", 00:18:41.006 "uuid": "88590899-2c2c-5339-8e42-ab70c3583038", 00:18:41.006 "is_configured": true, 00:18:41.006 "data_offset": 2048, 00:18:41.006 "data_size": 63488 00:18:41.006 }, 00:18:41.006 { 00:18:41.006 "name": "pt4", 00:18:41.006 "uuid": "df0456be-3c0b-55c3-b35d-e231c1ba00d1", 00:18:41.006 "is_configured": true, 00:18:41.006 "data_offset": 2048, 00:18:41.006 "data_size": 63488 00:18:41.006 } 00:18:41.006 ] 00:18:41.006 }' 00:18:41.006 16:34:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:41.006 16:34:17 -- common/autotest_common.sh@10 -- # set +x 00:18:41.575 16:34:18 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:41.575 16:34:18 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:41.575 [2024-07-11 16:34:18.372836] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:41.833 16:34:18 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=19b8af7c-1abb-4c93-9adc-9435611cddce 00:18:41.833 16:34:18 -- bdev/bdev_raid.sh@380 -- # '[' -z 19b8af7c-1abb-4c93-9adc-9435611cddce ']' 00:18:41.833 16:34:18 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:42.091 [2024-07-11 16:34:18.644644] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:42.091 [2024-07-11 16:34:18.644670] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:42.091 [2024-07-11 16:34:18.644754] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:42.091 [2024-07-11 16:34:18.644821] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:42.091 [2024-07-11 16:34:18.644831] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:42.091 16:34:18 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.091 16:34:18 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:42.091 16:34:18 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:42.091 16:34:18 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:42.091 16:34:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:42.091 16:34:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
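The sequence traced above is the core of raid_superblock_test: four malloc bdevs are wrapped in passthru bdevs, assembled into a raid0 with on-disk superblocks (-s), and the resulting state is checked over RPC. Condensed to the commands the trace actually issues (the $rpc shorthand is ours, one base bdev shown; malloc2/pt2 through malloc4/pt4 follow the same pattern):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # ... repeat for malloc2/pt2 .. malloc4/pt4 ...
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
    # verify_raid_bdev_state then inspects the JSON, e.g. the state field:
    $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expect "online"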
00:18:42.349 16:34:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:42.349 16:34:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:42.607 16:34:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:42.607 16:34:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:42.607 16:34:19 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:42.607 16:34:19 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:42.866 16:34:19 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:42.866 16:34:19 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:43.125 16:34:19 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:43.125 16:34:19 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:43.125 16:34:19 -- common/autotest_common.sh@640 -- # local es=0 00:18:43.125 16:34:19 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:43.125 16:34:19 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:43.125 16:34:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:43.125 16:34:19 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:43.125 16:34:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:43.125 16:34:19 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:43.125 16:34:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:43.125 16:34:19 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:43.125 16:34:19 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:43.125 16:34:19 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:43.384 [2024-07-11 16:34:20.060842] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:43.384 [2024-07-11 16:34:20.062686] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:43.384 [2024-07-11 16:34:20.062746] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:43.384 [2024-07-11 16:34:20.062792] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:43.384 [2024-07-11 16:34:20.062846] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:43.384 [2024-07-11 16:34:20.062912] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:43.384 [2024-07-11 16:34:20.062998] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:43.384 [2024-07-11 
16:34:20.063057] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:43.384 [2024-07-11 16:34:20.063084] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.384 [2024-07-11 16:34:20.063095] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:18:43.384 request: 00:18:43.384 { 00:18:43.384 "name": "raid_bdev1", 00:18:43.384 "raid_level": "raid0", 00:18:43.384 "base_bdevs": [ 00:18:43.384 "malloc1", 00:18:43.384 "malloc2", 00:18:43.384 "malloc3", 00:18:43.384 "malloc4" 00:18:43.384 ], 00:18:43.384 "superblock": false, 00:18:43.384 "strip_size_kb": 64, 00:18:43.384 "method": "bdev_raid_create", 00:18:43.384 "req_id": 1 00:18:43.384 } 00:18:43.384 Got JSON-RPC error response 00:18:43.384 response: 00:18:43.384 { 00:18:43.384 "code": -17, 00:18:43.384 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:43.384 } 00:18:43.384 16:34:20 -- common/autotest_common.sh@643 -- # es=1 00:18:43.384 16:34:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:43.384 16:34:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:43.384 16:34:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:43.384 16:34:20 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:43.384 16:34:20 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:43.643 [2024-07-11 16:34:20.432860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:43.643 [2024-07-11 16:34:20.432939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.643 [2024-07-11 16:34:20.432981] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:43.643 [2024-07-11 16:34:20.433006] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.643 [2024-07-11 16:34:20.434979] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.643 [2024-07-11 16:34:20.435055] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:43.643 [2024-07-11 16:34:20.435145] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:43.643 [2024-07-11 16:34:20.435207] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:43.643 pt1 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.643 16:34:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.901 16:34:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.901 "name": "raid_bdev1", 00:18:43.901 "uuid": "19b8af7c-1abb-4c93-9adc-9435611cddce", 00:18:43.901 "strip_size_kb": 64, 00:18:43.901 "state": "configuring", 00:18:43.901 "raid_level": "raid0", 00:18:43.901 "superblock": true, 00:18:43.901 "num_base_bdevs": 4, 00:18:43.901 "num_base_bdevs_discovered": 1, 00:18:43.901 "num_base_bdevs_operational": 4, 00:18:43.901 "base_bdevs_list": [ 00:18:43.901 { 00:18:43.901 "name": "pt1", 00:18:43.901 "uuid": "621520e1-2692-597c-9d70-aaebd99ec8e8", 00:18:43.901 "is_configured": true, 00:18:43.901 "data_offset": 2048, 00:18:43.901 "data_size": 63488 00:18:43.901 }, 00:18:43.901 { 00:18:43.901 "name": null, 00:18:43.901 "uuid": "264d49ff-6e73-55e8-9f33-41cf741c2a08", 00:18:43.901 "is_configured": false, 00:18:43.901 "data_offset": 2048, 00:18:43.901 "data_size": 63488 00:18:43.901 }, 00:18:43.901 { 00:18:43.901 "name": null, 00:18:43.901 "uuid": "88590899-2c2c-5339-8e42-ab70c3583038", 00:18:43.901 "is_configured": false, 00:18:43.901 "data_offset": 2048, 00:18:43.901 "data_size": 63488 00:18:43.901 }, 00:18:43.901 { 00:18:43.901 "name": null, 00:18:43.901 "uuid": "df0456be-3c0b-55c3-b35d-e231c1ba00d1", 00:18:43.901 "is_configured": false, 00:18:43.901 "data_offset": 2048, 00:18:43.901 "data_size": 63488 00:18:43.901 } 00:18:43.901 ] 00:18:43.901 }' 00:18:43.901 16:34:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.901 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:18:44.838 16:34:21 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:44.838 16:34:21 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:44.838 [2024-07-11 16:34:21.549167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:44.838 [2024-07-11 16:34:21.549260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.838 [2024-07-11 16:34:21.549302] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:44.838 [2024-07-11 16:34:21.549340] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.838 [2024-07-11 16:34:21.549930] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.838 [2024-07-11 16:34:21.550037] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:44.838 [2024-07-11 16:34:21.550138] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:44.838 [2024-07-11 16:34:21.550182] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:44.838 pt2 00:18:44.838 16:34:21 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:45.097 [2024-07-11 16:34:21.789169] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:45.097 16:34:21 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:45.097 16:34:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
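Just before this point, the test re-created the array directly from the raw malloc bdevs and required the call to fail: the earlier raid left superblocks on malloc1-malloc4, so bdev_raid_create returns -17 (File exists). The NOT helper, defined in the shared test harness, inverts the exit status so that the expected failure lets the script continue. A sketch of that negative check:

    # must fail: the malloc bdevs still carry raid superblocks from raid_bdev1
    NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1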
00:18:45.097 16:34:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:45.097 16:34:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:45.097 16:34:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:45.097 16:34:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:45.097 16:34:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:45.097 16:34:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:45.097 16:34:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:45.097 16:34:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:45.097 16:34:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.097 16:34:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.356 16:34:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:45.356 "name": "raid_bdev1", 00:18:45.356 "uuid": "19b8af7c-1abb-4c93-9adc-9435611cddce", 00:18:45.356 "strip_size_kb": 64, 00:18:45.356 "state": "configuring", 00:18:45.356 "raid_level": "raid0", 00:18:45.356 "superblock": true, 00:18:45.356 "num_base_bdevs": 4, 00:18:45.356 "num_base_bdevs_discovered": 1, 00:18:45.356 "num_base_bdevs_operational": 4, 00:18:45.356 "base_bdevs_list": [ 00:18:45.356 { 00:18:45.356 "name": "pt1", 00:18:45.356 "uuid": "621520e1-2692-597c-9d70-aaebd99ec8e8", 00:18:45.356 "is_configured": true, 00:18:45.356 "data_offset": 2048, 00:18:45.356 "data_size": 63488 00:18:45.356 }, 00:18:45.356 { 00:18:45.356 "name": null, 00:18:45.356 "uuid": "264d49ff-6e73-55e8-9f33-41cf741c2a08", 00:18:45.356 "is_configured": false, 00:18:45.356 "data_offset": 2048, 00:18:45.356 "data_size": 63488 00:18:45.356 }, 00:18:45.356 { 00:18:45.356 "name": null, 00:18:45.356 "uuid": "88590899-2c2c-5339-8e42-ab70c3583038", 00:18:45.356 "is_configured": false, 00:18:45.356 "data_offset": 2048, 00:18:45.356 "data_size": 63488 00:18:45.356 }, 00:18:45.356 { 00:18:45.357 "name": null, 00:18:45.357 "uuid": "df0456be-3c0b-55c3-b35d-e231c1ba00d1", 00:18:45.357 "is_configured": false, 00:18:45.357 "data_offset": 2048, 00:18:45.357 "data_size": 63488 00:18:45.357 } 00:18:45.357 ] 00:18:45.357 }' 00:18:45.357 16:34:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:45.357 16:34:22 -- common/autotest_common.sh@10 -- # set +x 00:18:45.924 16:34:22 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:45.924 16:34:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:45.924 16:34:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:46.182 [2024-07-11 16:34:22.849498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:46.182 [2024-07-11 16:34:22.849592] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.182 [2024-07-11 16:34:22.849632] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:46.182 [2024-07-11 16:34:22.849655] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.182 [2024-07-11 16:34:22.850158] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.182 [2024-07-11 16:34:22.850233] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:46.182 [2024-07-11 16:34:22.850337] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:18:46.182 [2024-07-11 16:34:22.850398] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:46.182 pt2 00:18:46.182 16:34:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:46.182 16:34:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:46.182 16:34:22 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:46.439 [2024-07-11 16:34:23.109493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:46.439 [2024-07-11 16:34:23.109569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.439 [2024-07-11 16:34:23.109596] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:46.439 [2024-07-11 16:34:23.109620] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.439 [2024-07-11 16:34:23.110041] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.439 [2024-07-11 16:34:23.110102] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:46.439 [2024-07-11 16:34:23.110185] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:46.439 [2024-07-11 16:34:23.110210] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:46.439 pt3 00:18:46.439 16:34:23 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:46.439 16:34:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:46.439 16:34:23 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:46.697 [2024-07-11 16:34:23.285546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:46.697 [2024-07-11 16:34:23.285626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.697 [2024-07-11 16:34:23.285658] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:46.697 [2024-07-11 16:34:23.285681] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.697 [2024-07-11 16:34:23.286076] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.697 [2024-07-11 16:34:23.286134] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:46.697 [2024-07-11 16:34:23.286221] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:46.697 [2024-07-11 16:34:23.286247] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:46.697 [2024-07-11 16:34:23.286376] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:18:46.697 [2024-07-11 16:34:23.286390] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:46.697 [2024-07-11 16:34:23.286486] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:46.697 [2024-07-11 16:34:23.286798] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:18:46.697 [2024-07-11 16:34:23.286823] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:18:46.697 [2024-07-11 16:34:23.286949] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:46.697 pt4 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.697 16:34:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.955 16:34:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:46.955 "name": "raid_bdev1", 00:18:46.955 "uuid": "19b8af7c-1abb-4c93-9adc-9435611cddce", 00:18:46.955 "strip_size_kb": 64, 00:18:46.955 "state": "online", 00:18:46.955 "raid_level": "raid0", 00:18:46.955 "superblock": true, 00:18:46.955 "num_base_bdevs": 4, 00:18:46.955 "num_base_bdevs_discovered": 4, 00:18:46.955 "num_base_bdevs_operational": 4, 00:18:46.955 "base_bdevs_list": [ 00:18:46.955 { 00:18:46.955 "name": "pt1", 00:18:46.955 "uuid": "621520e1-2692-597c-9d70-aaebd99ec8e8", 00:18:46.955 "is_configured": true, 00:18:46.955 "data_offset": 2048, 00:18:46.955 "data_size": 63488 00:18:46.955 }, 00:18:46.955 { 00:18:46.955 "name": "pt2", 00:18:46.955 "uuid": "264d49ff-6e73-55e8-9f33-41cf741c2a08", 00:18:46.955 "is_configured": true, 00:18:46.955 "data_offset": 2048, 00:18:46.955 "data_size": 63488 00:18:46.955 }, 00:18:46.955 { 00:18:46.955 "name": "pt3", 00:18:46.955 "uuid": "88590899-2c2c-5339-8e42-ab70c3583038", 00:18:46.955 "is_configured": true, 00:18:46.955 "data_offset": 2048, 00:18:46.955 "data_size": 63488 00:18:46.955 }, 00:18:46.955 { 00:18:46.955 "name": "pt4", 00:18:46.955 "uuid": "df0456be-3c0b-55c3-b35d-e231c1ba00d1", 00:18:46.955 "is_configured": true, 00:18:46.955 "data_offset": 2048, 00:18:46.955 "data_size": 63488 00:18:46.955 } 00:18:46.955 ] 00:18:46.955 }' 00:18:46.955 16:34:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:46.955 16:34:23 -- common/autotest_common.sh@10 -- # set +x 00:18:47.523 16:34:24 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:47.523 16:34:24 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:47.523 [2024-07-11 16:34:24.289942] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.523 16:34:24 -- bdev/bdev_raid.sh@430 -- # '[' 19b8af7c-1abb-4c93-9adc-9435611cddce '!=' 19b8af7c-1abb-4c93-9adc-9435611cddce ']' 00:18:47.523 16:34:24 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:18:47.523 16:34:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:47.523 16:34:24 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:47.523 16:34:24 -- bdev/bdev_raid.sh@511 -- # killprocess 122285 00:18:47.523 16:34:24 -- common/autotest_common.sh@926 -- # '[' -z 
122285 ']' 00:18:47.523 16:34:24 -- common/autotest_common.sh@930 -- # kill -0 122285 00:18:47.523 16:34:24 -- common/autotest_common.sh@931 -- # uname 00:18:47.523 16:34:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:47.523 16:34:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122285 00:18:47.523 killing process with pid 122285 00:18:47.523 16:34:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:47.523 16:34:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:47.523 16:34:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122285' 00:18:47.523 16:34:24 -- common/autotest_common.sh@945 -- # kill 122285 00:18:47.523 16:34:24 -- common/autotest_common.sh@950 -- # wait 122285 00:18:47.523 [2024-07-11 16:34:24.321694] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.523 [2024-07-11 16:34:24.321844] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.523 [2024-07-11 16:34:24.321926] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.523 [2024-07-11 16:34:24.321947] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:18:47.781 [2024-07-11 16:34:24.573793] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.717 ************************************ 00:18:48.717 END TEST raid_superblock_test 00:18:48.717 ************************************ 00:18:48.717 16:34:25 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:48.717 00:18:48.717 real 0m10.944s 00:18:48.717 user 0m19.234s 00:18:48.717 sys 0m1.199s 00:18:48.717 16:34:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:48.717 16:34:25 -- common/autotest_common.sh@10 -- # set +x 00:18:48.718 16:34:25 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:48.718 16:34:25 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:48.718 16:34:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:18:48.718 16:34:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:48.718 16:34:25 -- common/autotest_common.sh@10 -- # set +x 00:18:48.976 ************************************ 00:18:48.976 START TEST raid_state_function_test 00:18:48.976 ************************************ 00:18:48.977 16:34:25 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:48.977 16:34:25 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=122621 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122621' 00:18:48.977 Process raid pid: 122621 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122621 /var/tmp/spdk-raid.sock 00:18:48.977 16:34:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:48.977 16:34:25 -- common/autotest_common.sh@819 -- # '[' -z 122621 ']' 00:18:48.977 16:34:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:48.977 16:34:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:48.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:48.977 16:34:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:48.977 16:34:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:48.977 16:34:25 -- common/autotest_common.sh@10 -- # set +x 00:18:48.977 [2024-07-11 16:34:25.607807] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
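raid_state_function_test parameterizes itself the same way for concat as for raid0: rather than hard-coding names, it derives the base bdev list from num_base_bdevs. The array construction visible in the trace, extracted so it can be run on its own:

    num_base_bdevs=4
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
    echo "${base_bdevs[@]}"   # -> BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4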
00:18:48.977 [2024-07-11 16:34:25.608634] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.977 [2024-07-11 16:34:25.779289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.236 [2024-07-11 16:34:25.978598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.494 [2024-07-11 16:34:26.143144] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:49.753 16:34:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:49.753 16:34:26 -- common/autotest_common.sh@852 -- # return 0 00:18:49.753 16:34:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:50.012 [2024-07-11 16:34:26.700486] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:50.012 [2024-07-11 16:34:26.700571] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:50.012 [2024-07-11 16:34:26.700584] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.012 [2024-07-11 16:34:26.700605] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.012 [2024-07-11 16:34:26.700612] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:50.012 [2024-07-11 16:34:26.700647] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:50.012 [2024-07-11 16:34:26.700656] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:50.012 [2024-07-11 16:34:26.700677] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:50.012 16:34:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:50.012 16:34:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:50.012 16:34:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:50.013 16:34:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:50.013 16:34:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:50.013 16:34:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:50.013 16:34:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.013 16:34:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.013 16:34:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.013 16:34:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.013 16:34:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.013 16:34:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.271 16:34:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.271 "name": "Existed_Raid", 00:18:50.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.271 "strip_size_kb": 64, 00:18:50.271 "state": "configuring", 00:18:50.271 "raid_level": "concat", 00:18:50.271 "superblock": false, 00:18:50.271 "num_base_bdevs": 4, 00:18:50.271 "num_base_bdevs_discovered": 0, 00:18:50.271 "num_base_bdevs_operational": 4, 00:18:50.271 "base_bdevs_list": [ 00:18:50.271 { 00:18:50.271 
"name": "BaseBdev1", 00:18:50.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.271 "is_configured": false, 00:18:50.271 "data_offset": 0, 00:18:50.271 "data_size": 0 00:18:50.271 }, 00:18:50.271 { 00:18:50.271 "name": "BaseBdev2", 00:18:50.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.271 "is_configured": false, 00:18:50.271 "data_offset": 0, 00:18:50.271 "data_size": 0 00:18:50.271 }, 00:18:50.271 { 00:18:50.271 "name": "BaseBdev3", 00:18:50.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.271 "is_configured": false, 00:18:50.271 "data_offset": 0, 00:18:50.271 "data_size": 0 00:18:50.271 }, 00:18:50.271 { 00:18:50.271 "name": "BaseBdev4", 00:18:50.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.271 "is_configured": false, 00:18:50.271 "data_offset": 0, 00:18:50.271 "data_size": 0 00:18:50.271 } 00:18:50.271 ] 00:18:50.272 }' 00:18:50.272 16:34:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.272 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:18:50.839 16:34:27 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:51.098 [2024-07-11 16:34:27.788584] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:51.098 [2024-07-11 16:34:27.788621] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:51.098 16:34:27 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:51.360 [2024-07-11 16:34:27.984671] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:51.360 [2024-07-11 16:34:27.984728] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:51.360 [2024-07-11 16:34:27.984754] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:51.360 [2024-07-11 16:34:27.984794] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:51.360 [2024-07-11 16:34:27.984803] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:51.360 [2024-07-11 16:34:27.984836] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:51.360 [2024-07-11 16:34:27.984844] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:51.360 [2024-07-11 16:34:27.984865] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:51.360 16:34:27 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:51.622 [2024-07-11 16:34:28.197906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:51.622 BaseBdev1 00:18:51.622 16:34:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:51.622 16:34:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:18:51.622 16:34:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:51.622 16:34:28 -- common/autotest_common.sh@889 -- # local i 00:18:51.622 16:34:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:51.622 16:34:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:51.622 16:34:28 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:51.622 16:34:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:51.881 [ 00:18:51.881 { 00:18:51.881 "name": "BaseBdev1", 00:18:51.881 "aliases": [ 00:18:51.881 "65c00167-9b93-4b9b-8243-98e3d7a8faec" 00:18:51.881 ], 00:18:51.881 "product_name": "Malloc disk", 00:18:51.881 "block_size": 512, 00:18:51.881 "num_blocks": 65536, 00:18:51.881 "uuid": "65c00167-9b93-4b9b-8243-98e3d7a8faec", 00:18:51.881 "assigned_rate_limits": { 00:18:51.881 "rw_ios_per_sec": 0, 00:18:51.881 "rw_mbytes_per_sec": 0, 00:18:51.881 "r_mbytes_per_sec": 0, 00:18:51.881 "w_mbytes_per_sec": 0 00:18:51.881 }, 00:18:51.881 "claimed": true, 00:18:51.881 "claim_type": "exclusive_write", 00:18:51.881 "zoned": false, 00:18:51.881 "supported_io_types": { 00:18:51.881 "read": true, 00:18:51.881 "write": true, 00:18:51.881 "unmap": true, 00:18:51.881 "write_zeroes": true, 00:18:51.881 "flush": true, 00:18:51.881 "reset": true, 00:18:51.881 "compare": false, 00:18:51.881 "compare_and_write": false, 00:18:51.881 "abort": true, 00:18:51.881 "nvme_admin": false, 00:18:51.881 "nvme_io": false 00:18:51.881 }, 00:18:51.881 "memory_domains": [ 00:18:51.881 { 00:18:51.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.881 "dma_device_type": 2 00:18:51.881 } 00:18:51.881 ], 00:18:51.881 "driver_specific": {} 00:18:51.881 } 00:18:51.881 ] 00:18:51.881 16:34:28 -- common/autotest_common.sh@895 -- # return 0 00:18:51.881 16:34:28 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:51.881 16:34:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:51.881 16:34:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:51.881 16:34:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:51.881 16:34:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:51.881 16:34:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:51.881 16:34:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.881 16:34:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.881 16:34:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.881 16:34:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.881 16:34:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.881 16:34:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.139 16:34:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:52.139 "name": "Existed_Raid", 00:18:52.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.139 "strip_size_kb": 64, 00:18:52.139 "state": "configuring", 00:18:52.139 "raid_level": "concat", 00:18:52.139 "superblock": false, 00:18:52.139 "num_base_bdevs": 4, 00:18:52.139 "num_base_bdevs_discovered": 1, 00:18:52.140 "num_base_bdevs_operational": 4, 00:18:52.140 "base_bdevs_list": [ 00:18:52.140 { 00:18:52.140 "name": "BaseBdev1", 00:18:52.140 "uuid": "65c00167-9b93-4b9b-8243-98e3d7a8faec", 00:18:52.140 "is_configured": true, 00:18:52.140 "data_offset": 0, 00:18:52.140 "data_size": 65536 00:18:52.140 }, 00:18:52.140 { 00:18:52.140 "name": "BaseBdev2", 00:18:52.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.140 "is_configured": false, 00:18:52.140 "data_offset": 0, 00:18:52.140 "data_size": 0 00:18:52.140 }, 
00:18:52.140 { 00:18:52.140 "name": "BaseBdev3", 00:18:52.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.140 "is_configured": false, 00:18:52.140 "data_offset": 0, 00:18:52.140 "data_size": 0 00:18:52.140 }, 00:18:52.140 { 00:18:52.140 "name": "BaseBdev4", 00:18:52.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.140 "is_configured": false, 00:18:52.140 "data_offset": 0, 00:18:52.140 "data_size": 0 00:18:52.140 } 00:18:52.140 ] 00:18:52.140 }' 00:18:52.140 16:34:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:52.140 16:34:28 -- common/autotest_common.sh@10 -- # set +x 00:18:52.707 16:34:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:52.966 [2024-07-11 16:34:29.634169] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:52.966 [2024-07-11 16:34:29.634217] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:52.966 16:34:29 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:52.966 16:34:29 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:53.225 [2024-07-11 16:34:29.870287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:53.225 [2024-07-11 16:34:29.871889] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:53.225 [2024-07-11 16:34:29.871965] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:53.225 [2024-07-11 16:34:29.871992] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:53.225 [2024-07-11 16:34:29.872014] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:53.225 [2024-07-11 16:34:29.872022] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:53.225 [2024-07-11 16:34:29.872037] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.225 16:34:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.483 16:34:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:53.483 "name": "Existed_Raid", 00:18:53.483 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.483 "strip_size_kb": 64, 00:18:53.483 "state": "configuring", 00:18:53.483 "raid_level": "concat", 00:18:53.483 "superblock": false, 00:18:53.483 "num_base_bdevs": 4, 00:18:53.483 "num_base_bdevs_discovered": 1, 00:18:53.483 "num_base_bdevs_operational": 4, 00:18:53.483 "base_bdevs_list": [ 00:18:53.483 { 00:18:53.483 "name": "BaseBdev1", 00:18:53.483 "uuid": "65c00167-9b93-4b9b-8243-98e3d7a8faec", 00:18:53.483 "is_configured": true, 00:18:53.483 "data_offset": 0, 00:18:53.483 "data_size": 65536 00:18:53.483 }, 00:18:53.483 { 00:18:53.483 "name": "BaseBdev2", 00:18:53.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.483 "is_configured": false, 00:18:53.483 "data_offset": 0, 00:18:53.483 "data_size": 0 00:18:53.483 }, 00:18:53.483 { 00:18:53.483 "name": "BaseBdev3", 00:18:53.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.483 "is_configured": false, 00:18:53.483 "data_offset": 0, 00:18:53.483 "data_size": 0 00:18:53.483 }, 00:18:53.483 { 00:18:53.483 "name": "BaseBdev4", 00:18:53.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.483 "is_configured": false, 00:18:53.483 "data_offset": 0, 00:18:53.483 "data_size": 0 00:18:53.483 } 00:18:53.483 ] 00:18:53.483 }' 00:18:53.483 16:34:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:53.483 16:34:30 -- common/autotest_common.sh@10 -- # set +x 00:18:54.051 16:34:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:54.309 [2024-07-11 16:34:31.006518] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:54.309 BaseBdev2 00:18:54.309 16:34:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:54.309 16:34:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:18:54.310 16:34:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:54.310 16:34:31 -- common/autotest_common.sh@889 -- # local i 00:18:54.310 16:34:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:54.310 16:34:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:54.310 16:34:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:54.568 16:34:31 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:54.826 [ 00:18:54.826 { 00:18:54.826 "name": "BaseBdev2", 00:18:54.826 "aliases": [ 00:18:54.826 "c2cabf76-d89c-42c4-98ff-e82e92371de9" 00:18:54.826 ], 00:18:54.826 "product_name": "Malloc disk", 00:18:54.826 "block_size": 512, 00:18:54.826 "num_blocks": 65536, 00:18:54.826 "uuid": "c2cabf76-d89c-42c4-98ff-e82e92371de9", 00:18:54.826 "assigned_rate_limits": { 00:18:54.826 "rw_ios_per_sec": 0, 00:18:54.826 "rw_mbytes_per_sec": 0, 00:18:54.826 "r_mbytes_per_sec": 0, 00:18:54.826 "w_mbytes_per_sec": 0 00:18:54.826 }, 00:18:54.826 "claimed": true, 00:18:54.826 "claim_type": "exclusive_write", 00:18:54.826 "zoned": false, 00:18:54.826 "supported_io_types": { 00:18:54.826 "read": true, 00:18:54.826 "write": true, 00:18:54.826 "unmap": true, 00:18:54.826 "write_zeroes": true, 00:18:54.826 "flush": true, 00:18:54.826 "reset": true, 00:18:54.826 "compare": false, 00:18:54.826 "compare_and_write": false, 00:18:54.826 "abort": true, 00:18:54.826 "nvme_admin": false, 00:18:54.826 "nvme_io": false 00:18:54.826 }, 00:18:54.826 "memory_domains": [ 
00:18:54.826 { 00:18:54.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.826 "dma_device_type": 2 00:18:54.826 } 00:18:54.826 ], 00:18:54.826 "driver_specific": {} 00:18:54.826 } 00:18:54.826 ] 00:18:54.826 16:34:31 -- common/autotest_common.sh@895 -- # return 0 00:18:54.826 16:34:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:54.826 16:34:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:54.827 16:34:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:54.827 16:34:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:54.827 16:34:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:54.827 16:34:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:54.827 16:34:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:54.827 16:34:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:54.827 16:34:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:54.827 16:34:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:54.827 16:34:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:54.827 16:34:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:54.827 16:34:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.827 16:34:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.085 16:34:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.085 "name": "Existed_Raid", 00:18:55.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.085 "strip_size_kb": 64, 00:18:55.085 "state": "configuring", 00:18:55.085 "raid_level": "concat", 00:18:55.085 "superblock": false, 00:18:55.085 "num_base_bdevs": 4, 00:18:55.085 "num_base_bdevs_discovered": 2, 00:18:55.085 "num_base_bdevs_operational": 4, 00:18:55.085 "base_bdevs_list": [ 00:18:55.085 { 00:18:55.085 "name": "BaseBdev1", 00:18:55.085 "uuid": "65c00167-9b93-4b9b-8243-98e3d7a8faec", 00:18:55.085 "is_configured": true, 00:18:55.085 "data_offset": 0, 00:18:55.085 "data_size": 65536 00:18:55.085 }, 00:18:55.085 { 00:18:55.085 "name": "BaseBdev2", 00:18:55.085 "uuid": "c2cabf76-d89c-42c4-98ff-e82e92371de9", 00:18:55.085 "is_configured": true, 00:18:55.085 "data_offset": 0, 00:18:55.085 "data_size": 65536 00:18:55.085 }, 00:18:55.085 { 00:18:55.085 "name": "BaseBdev3", 00:18:55.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.085 "is_configured": false, 00:18:55.085 "data_offset": 0, 00:18:55.085 "data_size": 0 00:18:55.085 }, 00:18:55.085 { 00:18:55.085 "name": "BaseBdev4", 00:18:55.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.085 "is_configured": false, 00:18:55.085 "data_offset": 0, 00:18:55.085 "data_size": 0 00:18:55.085 } 00:18:55.085 ] 00:18:55.085 }' 00:18:55.085 16:34:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.085 16:34:31 -- common/autotest_common.sh@10 -- # set +x 00:18:55.651 16:34:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:55.910 [2024-07-11 16:34:32.494195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:55.910 BaseBdev3 00:18:55.910 16:34:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:55.910 16:34:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:18:55.910 16:34:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:55.910 
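For anyone replaying this trace by hand: the waitforbdev helper being entered here (from common/autotest_common.sh) just waits for bdev examination to settle and then polls for the named bdev with a timeout. A minimal sketch of the same flow, using only rpc.py subcommands that appear verbatim in this log; the socket path and bdev name are copied from the trace, and the helper's full retry loop is not reproduced:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  "$rpc" -s "$sock" bdev_wait_for_examine               # returns once examine callbacks have finished
  "$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev3 -t 2000 # errors out if the bdev is still absent after 2000 ms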
16:34:32 -- common/autotest_common.sh@889 -- # local i 00:18:55.910 16:34:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:55.910 16:34:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:55.910 16:34:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:55.910 16:34:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:56.169 [ 00:18:56.169 { 00:18:56.169 "name": "BaseBdev3", 00:18:56.169 "aliases": [ 00:18:56.169 "434e6321-e5e8-4000-8f0d-1d124446c3ca" 00:18:56.169 ], 00:18:56.169 "product_name": "Malloc disk", 00:18:56.169 "block_size": 512, 00:18:56.169 "num_blocks": 65536, 00:18:56.169 "uuid": "434e6321-e5e8-4000-8f0d-1d124446c3ca", 00:18:56.169 "assigned_rate_limits": { 00:18:56.169 "rw_ios_per_sec": 0, 00:18:56.169 "rw_mbytes_per_sec": 0, 00:18:56.169 "r_mbytes_per_sec": 0, 00:18:56.169 "w_mbytes_per_sec": 0 00:18:56.169 }, 00:18:56.169 "claimed": true, 00:18:56.169 "claim_type": "exclusive_write", 00:18:56.169 "zoned": false, 00:18:56.169 "supported_io_types": { 00:18:56.169 "read": true, 00:18:56.169 "write": true, 00:18:56.169 "unmap": true, 00:18:56.169 "write_zeroes": true, 00:18:56.169 "flush": true, 00:18:56.169 "reset": true, 00:18:56.169 "compare": false, 00:18:56.169 "compare_and_write": false, 00:18:56.169 "abort": true, 00:18:56.169 "nvme_admin": false, 00:18:56.169 "nvme_io": false 00:18:56.169 }, 00:18:56.169 "memory_domains": [ 00:18:56.169 { 00:18:56.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.169 "dma_device_type": 2 00:18:56.169 } 00:18:56.169 ], 00:18:56.169 "driver_specific": {} 00:18:56.169 } 00:18:56.169 ] 00:18:56.169 16:34:32 -- common/autotest_common.sh@895 -- # return 0 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.169 16:34:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.427 16:34:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:56.427 "name": "Existed_Raid", 00:18:56.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.427 "strip_size_kb": 64, 00:18:56.427 "state": "configuring", 00:18:56.427 "raid_level": "concat", 00:18:56.427 "superblock": false, 00:18:56.427 "num_base_bdevs": 4, 00:18:56.427 "num_base_bdevs_discovered": 3, 00:18:56.427 "num_base_bdevs_operational": 4, 00:18:56.427 "base_bdevs_list": [ 00:18:56.427 { 00:18:56.427 "name": 
"BaseBdev1", 00:18:56.427 "uuid": "65c00167-9b93-4b9b-8243-98e3d7a8faec", 00:18:56.427 "is_configured": true, 00:18:56.427 "data_offset": 0, 00:18:56.427 "data_size": 65536 00:18:56.427 }, 00:18:56.427 { 00:18:56.427 "name": "BaseBdev2", 00:18:56.427 "uuid": "c2cabf76-d89c-42c4-98ff-e82e92371de9", 00:18:56.427 "is_configured": true, 00:18:56.427 "data_offset": 0, 00:18:56.427 "data_size": 65536 00:18:56.427 }, 00:18:56.427 { 00:18:56.427 "name": "BaseBdev3", 00:18:56.427 "uuid": "434e6321-e5e8-4000-8f0d-1d124446c3ca", 00:18:56.427 "is_configured": true, 00:18:56.427 "data_offset": 0, 00:18:56.427 "data_size": 65536 00:18:56.427 }, 00:18:56.427 { 00:18:56.427 "name": "BaseBdev4", 00:18:56.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.427 "is_configured": false, 00:18:56.427 "data_offset": 0, 00:18:56.427 "data_size": 0 00:18:56.427 } 00:18:56.427 ] 00:18:56.427 }' 00:18:56.427 16:34:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:56.427 16:34:33 -- common/autotest_common.sh@10 -- # set +x 00:18:56.993 16:34:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:57.251 [2024-07-11 16:34:33.978124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:57.251 [2024-07-11 16:34:33.978191] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:57.251 [2024-07-11 16:34:33.978201] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:57.251 [2024-07-11 16:34:33.978344] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:57.251 [2024-07-11 16:34:33.978700] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:57.251 [2024-07-11 16:34:33.978725] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:57.251 [2024-07-11 16:34:33.979007] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:57.251 BaseBdev4 00:18:57.251 16:34:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:57.251 16:34:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:18:57.251 16:34:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:18:57.251 16:34:33 -- common/autotest_common.sh@889 -- # local i 00:18:57.251 16:34:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:18:57.251 16:34:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:18:57.251 16:34:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:57.510 16:34:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:57.768 [ 00:18:57.768 { 00:18:57.768 "name": "BaseBdev4", 00:18:57.768 "aliases": [ 00:18:57.768 "5e87b4cd-abed-4ec0-b0bd-570d6bc2a5c7" 00:18:57.768 ], 00:18:57.768 "product_name": "Malloc disk", 00:18:57.768 "block_size": 512, 00:18:57.768 "num_blocks": 65536, 00:18:57.768 "uuid": "5e87b4cd-abed-4ec0-b0bd-570d6bc2a5c7", 00:18:57.768 "assigned_rate_limits": { 00:18:57.768 "rw_ios_per_sec": 0, 00:18:57.768 "rw_mbytes_per_sec": 0, 00:18:57.768 "r_mbytes_per_sec": 0, 00:18:57.768 "w_mbytes_per_sec": 0 00:18:57.768 }, 00:18:57.768 "claimed": true, 00:18:57.768 "claim_type": "exclusive_write", 00:18:57.768 "zoned": false, 00:18:57.768 
"supported_io_types": { 00:18:57.768 "read": true, 00:18:57.768 "write": true, 00:18:57.768 "unmap": true, 00:18:57.768 "write_zeroes": true, 00:18:57.768 "flush": true, 00:18:57.768 "reset": true, 00:18:57.768 "compare": false, 00:18:57.768 "compare_and_write": false, 00:18:57.768 "abort": true, 00:18:57.768 "nvme_admin": false, 00:18:57.768 "nvme_io": false 00:18:57.768 }, 00:18:57.768 "memory_domains": [ 00:18:57.768 { 00:18:57.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.768 "dma_device_type": 2 00:18:57.768 } 00:18:57.768 ], 00:18:57.768 "driver_specific": {} 00:18:57.768 } 00:18:57.768 ] 00:18:57.768 16:34:34 -- common/autotest_common.sh@895 -- # return 0 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.768 16:34:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.026 16:34:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:58.026 "name": "Existed_Raid", 00:18:58.026 "uuid": "8c07d0ae-bbc0-4aca-94be-b0ed19e5f451", 00:18:58.026 "strip_size_kb": 64, 00:18:58.026 "state": "online", 00:18:58.026 "raid_level": "concat", 00:18:58.026 "superblock": false, 00:18:58.026 "num_base_bdevs": 4, 00:18:58.026 "num_base_bdevs_discovered": 4, 00:18:58.026 "num_base_bdevs_operational": 4, 00:18:58.026 "base_bdevs_list": [ 00:18:58.026 { 00:18:58.026 "name": "BaseBdev1", 00:18:58.026 "uuid": "65c00167-9b93-4b9b-8243-98e3d7a8faec", 00:18:58.026 "is_configured": true, 00:18:58.026 "data_offset": 0, 00:18:58.026 "data_size": 65536 00:18:58.026 }, 00:18:58.026 { 00:18:58.026 "name": "BaseBdev2", 00:18:58.027 "uuid": "c2cabf76-d89c-42c4-98ff-e82e92371de9", 00:18:58.027 "is_configured": true, 00:18:58.027 "data_offset": 0, 00:18:58.027 "data_size": 65536 00:18:58.027 }, 00:18:58.027 { 00:18:58.027 "name": "BaseBdev3", 00:18:58.027 "uuid": "434e6321-e5e8-4000-8f0d-1d124446c3ca", 00:18:58.027 "is_configured": true, 00:18:58.027 "data_offset": 0, 00:18:58.027 "data_size": 65536 00:18:58.027 }, 00:18:58.027 { 00:18:58.027 "name": "BaseBdev4", 00:18:58.027 "uuid": "5e87b4cd-abed-4ec0-b0bd-570d6bc2a5c7", 00:18:58.027 "is_configured": true, 00:18:58.027 "data_offset": 0, 00:18:58.027 "data_size": 65536 00:18:58.027 } 00:18:58.027 ] 00:18:58.027 }' 00:18:58.027 16:34:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:58.027 16:34:34 -- common/autotest_common.sh@10 -- # set +x 00:18:58.594 16:34:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:18:58.853 [2024-07-11 16:34:35.406426] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:58.853 [2024-07-11 16:34:35.406459] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.853 [2024-07-11 16:34:35.406533] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.853 16:34:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.112 16:34:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:59.112 "name": "Existed_Raid", 00:18:59.112 "uuid": "8c07d0ae-bbc0-4aca-94be-b0ed19e5f451", 00:18:59.112 "strip_size_kb": 64, 00:18:59.112 "state": "offline", 00:18:59.112 "raid_level": "concat", 00:18:59.112 "superblock": false, 00:18:59.112 "num_base_bdevs": 4, 00:18:59.112 "num_base_bdevs_discovered": 3, 00:18:59.112 "num_base_bdevs_operational": 3, 00:18:59.112 "base_bdevs_list": [ 00:18:59.112 { 00:18:59.112 "name": null, 00:18:59.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.112 "is_configured": false, 00:18:59.112 "data_offset": 0, 00:18:59.112 "data_size": 65536 00:18:59.112 }, 00:18:59.112 { 00:18:59.112 "name": "BaseBdev2", 00:18:59.112 "uuid": "c2cabf76-d89c-42c4-98ff-e82e92371de9", 00:18:59.112 "is_configured": true, 00:18:59.112 "data_offset": 0, 00:18:59.112 "data_size": 65536 00:18:59.112 }, 00:18:59.112 { 00:18:59.112 "name": "BaseBdev3", 00:18:59.112 "uuid": "434e6321-e5e8-4000-8f0d-1d124446c3ca", 00:18:59.112 "is_configured": true, 00:18:59.112 "data_offset": 0, 00:18:59.112 "data_size": 65536 00:18:59.112 }, 00:18:59.112 { 00:18:59.112 "name": "BaseBdev4", 00:18:59.112 "uuid": "5e87b4cd-abed-4ec0-b0bd-570d6bc2a5c7", 00:18:59.112 "is_configured": true, 00:18:59.112 "data_offset": 0, 00:18:59.112 "data_size": 65536 00:18:59.112 } 00:18:59.112 ] 00:18:59.112 }' 00:18:59.112 16:34:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:59.112 16:34:35 -- common/autotest_common.sh@10 -- # set +x 00:18:59.678 16:34:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:59.678 16:34:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:59.678 16:34:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:18:59.678 16:34:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:59.937 16:34:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:59.937 16:34:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:59.937 16:34:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:59.937 [2024-07-11 16:34:36.733489] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:00.195 16:34:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:00.195 16:34:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:00.195 16:34:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.195 16:34:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:00.453 16:34:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:00.453 16:34:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:00.453 16:34:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:00.453 [2024-07-11 16:34:37.256514] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:00.711 16:34:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:00.711 16:34:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:00.711 16:34:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.711 16:34:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:00.711 16:34:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:00.711 16:34:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:00.711 16:34:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:00.969 [2024-07-11 16:34:37.683872] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:00.969 [2024-07-11 16:34:37.683948] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:19:00.969 16:34:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:00.969 16:34:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:00.969 16:34:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.969 16:34:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:01.227 16:34:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:01.227 16:34:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:01.227 16:34:38 -- bdev/bdev_raid.sh@287 -- # killprocess 122621 00:19:01.227 16:34:38 -- common/autotest_common.sh@926 -- # '[' -z 122621 ']' 00:19:01.227 16:34:38 -- common/autotest_common.sh@930 -- # kill -0 122621 00:19:01.227 16:34:38 -- common/autotest_common.sh@931 -- # uname 00:19:01.227 16:34:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:01.227 16:34:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 122621 00:19:01.485 killing process with pid 122621 00:19:01.485 16:34:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:01.485 16:34:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:01.485 16:34:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 122621' 00:19:01.485 16:34:38 -- common/autotest_common.sh@945 
-- # kill 122621 00:19:01.485 16:34:38 -- common/autotest_common.sh@950 -- # wait 122621 00:19:01.485 [2024-07-11 16:34:38.037919] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:01.485 [2024-07-11 16:34:38.038091] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:02.420 ************************************ 00:19:02.420 END TEST raid_state_function_test 00:19:02.420 ************************************ 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:02.420 00:19:02.420 real 0m13.501s 00:19:02.420 user 0m24.251s 00:19:02.420 sys 0m1.509s 00:19:02.420 16:34:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:02.420 16:34:39 -- common/autotest_common.sh@10 -- # set +x 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:19:02.420 16:34:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:02.420 16:34:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:02.420 16:34:39 -- common/autotest_common.sh@10 -- # set +x 00:19:02.420 ************************************ 00:19:02.420 START TEST raid_state_function_test_sb 00:19:02.420 ************************************ 00:19:02.420 16:34:39 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=123099 00:19:02.420 Process raid pid: 123099 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123099' 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123099 /var/tmp/spdk-raid.sock 00:19:02.420 16:34:39 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:02.420 16:34:39 -- common/autotest_common.sh@819 -- # '[' -z 123099 ']' 00:19:02.420 16:34:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:02.420 16:34:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:02.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:02.420 16:34:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:02.420 16:34:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:02.420 16:34:39 -- common/autotest_common.sh@10 -- # set +x 00:19:02.420 [2024-07-11 16:34:39.150556] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:02.420 [2024-07-11 16:34:39.150759] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.678 [2024-07-11 16:34:39.319243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.937 [2024-07-11 16:34:39.513614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.937 [2024-07-11 16:34:39.678779] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:03.502 16:34:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:03.502 16:34:40 -- common/autotest_common.sh@852 -- # return 0 00:19:03.502 16:34:40 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:03.502 [2024-07-11 16:34:40.199662] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:03.502 [2024-07-11 16:34:40.199746] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:03.502 [2024-07-11 16:34:40.199759] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:03.502 [2024-07-11 16:34:40.199780] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:03.502 [2024-07-11 16:34:40.199787] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:03.502 [2024-07-11 16:34:40.199820] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:03.502 [2024-07-11 16:34:40.199829] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:03.502 [2024-07-11 16:34:40.199847] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:03.502 16:34:40 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:03.502 16:34:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:03.502 16:34:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:03.502 16:34:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:03.502 
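The locals dumped here belong to verify_raid_bdev_state, which this log enters after every configuration step. Judging from the traced commands, it fetches the raid bdev list, filters it with jq, and compares fields such as state and num_base_bdevs_discovered against the expected values. A sketch of that pattern; the pipeline is verbatim from the trace, while the comparison line is a paraphrase of the helper, not its literal code:

  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid")')
  [ "$(echo "$info" | jq -r .state)" = configuring ] || echo "unexpected raid state" >&2

Note that this _sb variant passed -s to bdev_raid_create, so once the base bdevs attach, their dumps further down show data_offset 2048 and data_size 63488 rather than the 0 and 65536 of the non-superblock run above, consistent with a superblock region reserved at the front of each base bdev.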
16:34:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:03.502 16:34:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:03.502 16:34:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:03.502 16:34:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:03.502 16:34:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:03.502 16:34:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:03.502 16:34:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.502 16:34:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.761 16:34:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.761 "name": "Existed_Raid", 00:19:03.761 "uuid": "5869dc23-89b5-4f24-ba2b-755700c34063", 00:19:03.761 "strip_size_kb": 64, 00:19:03.761 "state": "configuring", 00:19:03.761 "raid_level": "concat", 00:19:03.761 "superblock": true, 00:19:03.761 "num_base_bdevs": 4, 00:19:03.761 "num_base_bdevs_discovered": 0, 00:19:03.761 "num_base_bdevs_operational": 4, 00:19:03.761 "base_bdevs_list": [ 00:19:03.761 { 00:19:03.761 "name": "BaseBdev1", 00:19:03.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.761 "is_configured": false, 00:19:03.761 "data_offset": 0, 00:19:03.761 "data_size": 0 00:19:03.761 }, 00:19:03.761 { 00:19:03.761 "name": "BaseBdev2", 00:19:03.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.761 "is_configured": false, 00:19:03.761 "data_offset": 0, 00:19:03.761 "data_size": 0 00:19:03.761 }, 00:19:03.761 { 00:19:03.761 "name": "BaseBdev3", 00:19:03.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.761 "is_configured": false, 00:19:03.761 "data_offset": 0, 00:19:03.761 "data_size": 0 00:19:03.761 }, 00:19:03.761 { 00:19:03.761 "name": "BaseBdev4", 00:19:03.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.761 "is_configured": false, 00:19:03.761 "data_offset": 0, 00:19:03.761 "data_size": 0 00:19:03.761 } 00:19:03.761 ] 00:19:03.761 }' 00:19:03.761 16:34:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.761 16:34:40 -- common/autotest_common.sh@10 -- # set +x 00:19:04.327 16:34:41 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:04.584 [2024-07-11 16:34:41.239685] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:04.584 [2024-07-11 16:34:41.239735] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:04.584 16:34:41 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:04.842 [2024-07-11 16:34:41.487796] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:04.842 [2024-07-11 16:34:41.487846] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:04.842 [2024-07-11 16:34:41.487874] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:04.842 [2024-07-11 16:34:41.487902] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:04.842 [2024-07-11 16:34:41.487910] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:04.842 [2024-07-11 16:34:41.487941] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:04.842 [2024-07-11 16:34:41.487949] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:04.842 [2024-07-11 16:34:41.487970] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:04.842 16:34:41 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:05.099 [2024-07-11 16:34:41.752974] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.099 BaseBdev1 00:19:05.099 16:34:41 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:05.099 16:34:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:05.099 16:34:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:05.099 16:34:41 -- common/autotest_common.sh@889 -- # local i 00:19:05.099 16:34:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:05.099 16:34:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:05.099 16:34:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:05.356 16:34:41 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:05.356 [ 00:19:05.356 { 00:19:05.356 "name": "BaseBdev1", 00:19:05.356 "aliases": [ 00:19:05.356 "23af017c-9d5d-4067-8981-83156d67197d" 00:19:05.356 ], 00:19:05.356 "product_name": "Malloc disk", 00:19:05.356 "block_size": 512, 00:19:05.356 "num_blocks": 65536, 00:19:05.356 "uuid": "23af017c-9d5d-4067-8981-83156d67197d", 00:19:05.356 "assigned_rate_limits": { 00:19:05.356 "rw_ios_per_sec": 0, 00:19:05.356 "rw_mbytes_per_sec": 0, 00:19:05.356 "r_mbytes_per_sec": 0, 00:19:05.356 "w_mbytes_per_sec": 0 00:19:05.356 }, 00:19:05.356 "claimed": true, 00:19:05.356 "claim_type": "exclusive_write", 00:19:05.356 "zoned": false, 00:19:05.356 "supported_io_types": { 00:19:05.356 "read": true, 00:19:05.356 "write": true, 00:19:05.356 "unmap": true, 00:19:05.356 "write_zeroes": true, 00:19:05.356 "flush": true, 00:19:05.356 "reset": true, 00:19:05.356 "compare": false, 00:19:05.356 "compare_and_write": false, 00:19:05.356 "abort": true, 00:19:05.356 "nvme_admin": false, 00:19:05.356 "nvme_io": false 00:19:05.356 }, 00:19:05.356 "memory_domains": [ 00:19:05.356 { 00:19:05.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.356 "dma_device_type": 2 00:19:05.356 } 00:19:05.356 ], 00:19:05.356 "driver_specific": {} 00:19:05.356 } 00:19:05.356 ] 00:19:05.356 16:34:42 -- common/autotest_common.sh@895 -- # return 0 00:19:05.356 16:34:42 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:05.356 16:34:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:05.356 16:34:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:05.356 16:34:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:05.356 16:34:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:05.356 16:34:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:05.356 16:34:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:05.356 16:34:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:05.356 16:34:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:05.356 16:34:42 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:05.356 16:34:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.356 16:34:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.617 16:34:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:05.617 "name": "Existed_Raid", 00:19:05.617 "uuid": "8e4227f6-d091-43b2-928a-08155e9dbf77", 00:19:05.617 "strip_size_kb": 64, 00:19:05.617 "state": "configuring", 00:19:05.617 "raid_level": "concat", 00:19:05.617 "superblock": true, 00:19:05.617 "num_base_bdevs": 4, 00:19:05.617 "num_base_bdevs_discovered": 1, 00:19:05.617 "num_base_bdevs_operational": 4, 00:19:05.617 "base_bdevs_list": [ 00:19:05.617 { 00:19:05.617 "name": "BaseBdev1", 00:19:05.617 "uuid": "23af017c-9d5d-4067-8981-83156d67197d", 00:19:05.617 "is_configured": true, 00:19:05.617 "data_offset": 2048, 00:19:05.617 "data_size": 63488 00:19:05.617 }, 00:19:05.617 { 00:19:05.617 "name": "BaseBdev2", 00:19:05.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.617 "is_configured": false, 00:19:05.617 "data_offset": 0, 00:19:05.617 "data_size": 0 00:19:05.617 }, 00:19:05.617 { 00:19:05.617 "name": "BaseBdev3", 00:19:05.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.617 "is_configured": false, 00:19:05.617 "data_offset": 0, 00:19:05.617 "data_size": 0 00:19:05.617 }, 00:19:05.617 { 00:19:05.617 "name": "BaseBdev4", 00:19:05.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.617 "is_configured": false, 00:19:05.617 "data_offset": 0, 00:19:05.617 "data_size": 0 00:19:05.617 } 00:19:05.617 ] 00:19:05.617 }' 00:19:05.617 16:34:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:05.618 16:34:42 -- common/autotest_common.sh@10 -- # set +x 00:19:06.197 16:34:42 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:06.455 [2024-07-11 16:34:43.195312] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:06.455 [2024-07-11 16:34:43.195361] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:06.455 16:34:43 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:06.455 16:34:43 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:06.712 16:34:43 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:06.970 BaseBdev1 00:19:06.970 16:34:43 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:06.970 16:34:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:06.970 16:34:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:06.970 16:34:43 -- common/autotest_common.sh@889 -- # local i 00:19:06.970 16:34:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:06.970 16:34:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:06.970 16:34:43 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:07.228 16:34:43 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:07.228 [ 00:19:07.228 { 00:19:07.228 "name": "BaseBdev1", 00:19:07.228 "aliases": [ 00:19:07.228 
"d03b5336-3032-4bfa-af49-3848bb4e972b" 00:19:07.228 ], 00:19:07.228 "product_name": "Malloc disk", 00:19:07.228 "block_size": 512, 00:19:07.228 "num_blocks": 65536, 00:19:07.228 "uuid": "d03b5336-3032-4bfa-af49-3848bb4e972b", 00:19:07.228 "assigned_rate_limits": { 00:19:07.228 "rw_ios_per_sec": 0, 00:19:07.228 "rw_mbytes_per_sec": 0, 00:19:07.228 "r_mbytes_per_sec": 0, 00:19:07.228 "w_mbytes_per_sec": 0 00:19:07.228 }, 00:19:07.228 "claimed": false, 00:19:07.228 "zoned": false, 00:19:07.228 "supported_io_types": { 00:19:07.228 "read": true, 00:19:07.228 "write": true, 00:19:07.228 "unmap": true, 00:19:07.228 "write_zeroes": true, 00:19:07.228 "flush": true, 00:19:07.228 "reset": true, 00:19:07.228 "compare": false, 00:19:07.228 "compare_and_write": false, 00:19:07.228 "abort": true, 00:19:07.228 "nvme_admin": false, 00:19:07.228 "nvme_io": false 00:19:07.228 }, 00:19:07.228 "memory_domains": [ 00:19:07.228 { 00:19:07.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.228 "dma_device_type": 2 00:19:07.228 } 00:19:07.228 ], 00:19:07.228 "driver_specific": {} 00:19:07.228 } 00:19:07.228 ] 00:19:07.228 16:34:44 -- common/autotest_common.sh@895 -- # return 0 00:19:07.228 16:34:44 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:07.485 [2024-07-11 16:34:44.192879] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:07.485 [2024-07-11 16:34:44.194590] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:07.485 [2024-07-11 16:34:44.194682] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:07.485 [2024-07-11 16:34:44.194695] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:07.485 [2024-07-11 16:34:44.194718] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:07.485 [2024-07-11 16:34:44.194726] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:07.485 [2024-07-11 16:34:44.194741] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:07.485 16:34:44 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:07.485 16:34:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:07.485 16:34:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:07.485 16:34:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:07.485 16:34:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:07.485 16:34:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:07.485 16:34:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:07.485 16:34:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:07.485 16:34:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:07.485 16:34:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:07.485 16:34:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:07.485 16:34:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:07.486 16:34:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.486 16:34:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.743 16:34:44 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:19:07.743 "name": "Existed_Raid", 00:19:07.743 "uuid": "81186c9f-c275-4a3a-af48-dbfa94880f7c", 00:19:07.743 "strip_size_kb": 64, 00:19:07.743 "state": "configuring", 00:19:07.743 "raid_level": "concat", 00:19:07.743 "superblock": true, 00:19:07.743 "num_base_bdevs": 4, 00:19:07.743 "num_base_bdevs_discovered": 1, 00:19:07.743 "num_base_bdevs_operational": 4, 00:19:07.743 "base_bdevs_list": [ 00:19:07.743 { 00:19:07.743 "name": "BaseBdev1", 00:19:07.743 "uuid": "d03b5336-3032-4bfa-af49-3848bb4e972b", 00:19:07.743 "is_configured": true, 00:19:07.743 "data_offset": 2048, 00:19:07.743 "data_size": 63488 00:19:07.743 }, 00:19:07.743 { 00:19:07.743 "name": "BaseBdev2", 00:19:07.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.743 "is_configured": false, 00:19:07.743 "data_offset": 0, 00:19:07.743 "data_size": 0 00:19:07.743 }, 00:19:07.743 { 00:19:07.743 "name": "BaseBdev3", 00:19:07.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.743 "is_configured": false, 00:19:07.743 "data_offset": 0, 00:19:07.743 "data_size": 0 00:19:07.743 }, 00:19:07.743 { 00:19:07.743 "name": "BaseBdev4", 00:19:07.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.743 "is_configured": false, 00:19:07.743 "data_offset": 0, 00:19:07.743 "data_size": 0 00:19:07.743 } 00:19:07.743 ] 00:19:07.743 }' 00:19:07.743 16:34:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:07.743 16:34:44 -- common/autotest_common.sh@10 -- # set +x 00:19:08.310 16:34:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:08.569 [2024-07-11 16:34:45.329435] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:08.569 BaseBdev2 00:19:08.569 16:34:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:08.569 16:34:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:08.569 16:34:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:08.569 16:34:45 -- common/autotest_common.sh@889 -- # local i 00:19:08.569 16:34:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:08.569 16:34:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:08.569 16:34:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:08.827 16:34:45 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:09.086 [ 00:19:09.086 { 00:19:09.086 "name": "BaseBdev2", 00:19:09.086 "aliases": [ 00:19:09.086 "bdeceb2c-04f8-43da-a8dc-b558e9f4748b" 00:19:09.086 ], 00:19:09.086 "product_name": "Malloc disk", 00:19:09.086 "block_size": 512, 00:19:09.086 "num_blocks": 65536, 00:19:09.086 "uuid": "bdeceb2c-04f8-43da-a8dc-b558e9f4748b", 00:19:09.086 "assigned_rate_limits": { 00:19:09.086 "rw_ios_per_sec": 0, 00:19:09.086 "rw_mbytes_per_sec": 0, 00:19:09.086 "r_mbytes_per_sec": 0, 00:19:09.086 "w_mbytes_per_sec": 0 00:19:09.086 }, 00:19:09.086 "claimed": true, 00:19:09.086 "claim_type": "exclusive_write", 00:19:09.086 "zoned": false, 00:19:09.086 "supported_io_types": { 00:19:09.086 "read": true, 00:19:09.086 "write": true, 00:19:09.086 "unmap": true, 00:19:09.086 "write_zeroes": true, 00:19:09.086 "flush": true, 00:19:09.086 "reset": true, 00:19:09.086 "compare": false, 00:19:09.086 "compare_and_write": false, 00:19:09.086 "abort": true, 00:19:09.086 "nvme_admin": false, 00:19:09.086 
"nvme_io": false 00:19:09.086 }, 00:19:09.086 "memory_domains": [ 00:19:09.086 { 00:19:09.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.086 "dma_device_type": 2 00:19:09.086 } 00:19:09.086 ], 00:19:09.086 "driver_specific": {} 00:19:09.086 } 00:19:09.086 ] 00:19:09.086 16:34:45 -- common/autotest_common.sh@895 -- # return 0 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.086 16:34:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.344 16:34:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:09.344 "name": "Existed_Raid", 00:19:09.344 "uuid": "81186c9f-c275-4a3a-af48-dbfa94880f7c", 00:19:09.344 "strip_size_kb": 64, 00:19:09.344 "state": "configuring", 00:19:09.344 "raid_level": "concat", 00:19:09.344 "superblock": true, 00:19:09.344 "num_base_bdevs": 4, 00:19:09.344 "num_base_bdevs_discovered": 2, 00:19:09.344 "num_base_bdevs_operational": 4, 00:19:09.344 "base_bdevs_list": [ 00:19:09.344 { 00:19:09.344 "name": "BaseBdev1", 00:19:09.344 "uuid": "d03b5336-3032-4bfa-af49-3848bb4e972b", 00:19:09.344 "is_configured": true, 00:19:09.344 "data_offset": 2048, 00:19:09.344 "data_size": 63488 00:19:09.344 }, 00:19:09.344 { 00:19:09.344 "name": "BaseBdev2", 00:19:09.344 "uuid": "bdeceb2c-04f8-43da-a8dc-b558e9f4748b", 00:19:09.344 "is_configured": true, 00:19:09.344 "data_offset": 2048, 00:19:09.344 "data_size": 63488 00:19:09.344 }, 00:19:09.344 { 00:19:09.344 "name": "BaseBdev3", 00:19:09.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.344 "is_configured": false, 00:19:09.344 "data_offset": 0, 00:19:09.344 "data_size": 0 00:19:09.344 }, 00:19:09.344 { 00:19:09.344 "name": "BaseBdev4", 00:19:09.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.344 "is_configured": false, 00:19:09.344 "data_offset": 0, 00:19:09.344 "data_size": 0 00:19:09.344 } 00:19:09.344 ] 00:19:09.344 }' 00:19:09.344 16:34:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:09.344 16:34:45 -- common/autotest_common.sh@10 -- # set +x 00:19:09.911 16:34:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:10.171 [2024-07-11 16:34:46.897559] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:10.171 BaseBdev3 00:19:10.171 16:34:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:10.171 16:34:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:10.171 16:34:46 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:10.171 16:34:46 -- common/autotest_common.sh@889 -- # local i 00:19:10.171 16:34:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:10.171 16:34:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:10.171 16:34:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:10.430 16:34:47 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:10.689 [ 00:19:10.689 { 00:19:10.689 "name": "BaseBdev3", 00:19:10.689 "aliases": [ 00:19:10.689 "dae2ac9a-0a3d-421c-9204-660609e18da2" 00:19:10.689 ], 00:19:10.689 "product_name": "Malloc disk", 00:19:10.689 "block_size": 512, 00:19:10.689 "num_blocks": 65536, 00:19:10.689 "uuid": "dae2ac9a-0a3d-421c-9204-660609e18da2", 00:19:10.689 "assigned_rate_limits": { 00:19:10.689 "rw_ios_per_sec": 0, 00:19:10.689 "rw_mbytes_per_sec": 0, 00:19:10.689 "r_mbytes_per_sec": 0, 00:19:10.689 "w_mbytes_per_sec": 0 00:19:10.689 }, 00:19:10.689 "claimed": true, 00:19:10.689 "claim_type": "exclusive_write", 00:19:10.689 "zoned": false, 00:19:10.689 "supported_io_types": { 00:19:10.689 "read": true, 00:19:10.689 "write": true, 00:19:10.689 "unmap": true, 00:19:10.689 "write_zeroes": true, 00:19:10.689 "flush": true, 00:19:10.689 "reset": true, 00:19:10.689 "compare": false, 00:19:10.689 "compare_and_write": false, 00:19:10.689 "abort": true, 00:19:10.689 "nvme_admin": false, 00:19:10.689 "nvme_io": false 00:19:10.689 }, 00:19:10.689 "memory_domains": [ 00:19:10.689 { 00:19:10.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.689 "dma_device_type": 2 00:19:10.689 } 00:19:10.689 ], 00:19:10.689 "driver_specific": {} 00:19:10.689 } 00:19:10.689 ] 00:19:10.689 16:34:47 -- common/autotest_common.sh@895 -- # return 0 00:19:10.689 16:34:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:10.689 16:34:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:10.689 16:34:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:19:10.689 16:34:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:10.689 16:34:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:10.689 16:34:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:10.689 16:34:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:10.689 16:34:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:10.689 16:34:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:10.689 16:34:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:10.689 16:34:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:10.689 16:34:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:10.690 16:34:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.690 16:34:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.948 16:34:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:10.948 "name": "Existed_Raid", 00:19:10.948 "uuid": "81186c9f-c275-4a3a-af48-dbfa94880f7c", 00:19:10.948 "strip_size_kb": 64, 00:19:10.948 "state": "configuring", 00:19:10.948 "raid_level": "concat", 00:19:10.948 "superblock": true, 00:19:10.948 "num_base_bdevs": 4, 00:19:10.948 "num_base_bdevs_discovered": 3, 00:19:10.948 "num_base_bdevs_operational": 4, 
00:19:10.948 "base_bdevs_list": [ 00:19:10.948 { 00:19:10.948 "name": "BaseBdev1", 00:19:10.948 "uuid": "d03b5336-3032-4bfa-af49-3848bb4e972b", 00:19:10.948 "is_configured": true, 00:19:10.948 "data_offset": 2048, 00:19:10.948 "data_size": 63488 00:19:10.948 }, 00:19:10.948 { 00:19:10.948 "name": "BaseBdev2", 00:19:10.948 "uuid": "bdeceb2c-04f8-43da-a8dc-b558e9f4748b", 00:19:10.948 "is_configured": true, 00:19:10.948 "data_offset": 2048, 00:19:10.948 "data_size": 63488 00:19:10.948 }, 00:19:10.948 { 00:19:10.948 "name": "BaseBdev3", 00:19:10.948 "uuid": "dae2ac9a-0a3d-421c-9204-660609e18da2", 00:19:10.948 "is_configured": true, 00:19:10.948 "data_offset": 2048, 00:19:10.948 "data_size": 63488 00:19:10.948 }, 00:19:10.948 { 00:19:10.948 "name": "BaseBdev4", 00:19:10.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.948 "is_configured": false, 00:19:10.948 "data_offset": 0, 00:19:10.948 "data_size": 0 00:19:10.948 } 00:19:10.948 ] 00:19:10.948 }' 00:19:10.948 16:34:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:10.948 16:34:47 -- common/autotest_common.sh@10 -- # set +x 00:19:11.514 16:34:48 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:11.772 [2024-07-11 16:34:48.434925] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:11.772 [2024-07-11 16:34:48.435211] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:11.772 [2024-07-11 16:34:48.435238] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:11.772 [2024-07-11 16:34:48.435397] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:11.772 BaseBdev4 00:19:11.772 [2024-07-11 16:34:48.435801] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:11.772 [2024-07-11 16:34:48.435826] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:19:11.772 [2024-07-11 16:34:48.436028] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.772 16:34:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:11.772 16:34:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:11.772 16:34:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:11.772 16:34:48 -- common/autotest_common.sh@889 -- # local i 00:19:11.772 16:34:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:11.772 16:34:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:11.772 16:34:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.031 16:34:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:12.290 [ 00:19:12.290 { 00:19:12.290 "name": "BaseBdev4", 00:19:12.290 "aliases": [ 00:19:12.290 "704d57ad-6d5b-45d9-a2ca-1c93c1533264" 00:19:12.290 ], 00:19:12.290 "product_name": "Malloc disk", 00:19:12.290 "block_size": 512, 00:19:12.290 "num_blocks": 65536, 00:19:12.290 "uuid": "704d57ad-6d5b-45d9-a2ca-1c93c1533264", 00:19:12.290 "assigned_rate_limits": { 00:19:12.290 "rw_ios_per_sec": 0, 00:19:12.290 "rw_mbytes_per_sec": 0, 00:19:12.290 "r_mbytes_per_sec": 0, 00:19:12.290 "w_mbytes_per_sec": 0 00:19:12.290 }, 00:19:12.290 "claimed": true, 00:19:12.290 "claim_type": 
"exclusive_write", 00:19:12.290 "zoned": false, 00:19:12.290 "supported_io_types": { 00:19:12.290 "read": true, 00:19:12.290 "write": true, 00:19:12.290 "unmap": true, 00:19:12.290 "write_zeroes": true, 00:19:12.290 "flush": true, 00:19:12.290 "reset": true, 00:19:12.290 "compare": false, 00:19:12.290 "compare_and_write": false, 00:19:12.290 "abort": true, 00:19:12.290 "nvme_admin": false, 00:19:12.290 "nvme_io": false 00:19:12.290 }, 00:19:12.290 "memory_domains": [ 00:19:12.290 { 00:19:12.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.290 "dma_device_type": 2 00:19:12.290 } 00:19:12.290 ], 00:19:12.290 "driver_specific": {} 00:19:12.290 } 00:19:12.290 ] 00:19:12.290 16:34:48 -- common/autotest_common.sh@895 -- # return 0 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.290 16:34:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.290 16:34:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:12.290 "name": "Existed_Raid", 00:19:12.290 "uuid": "81186c9f-c275-4a3a-af48-dbfa94880f7c", 00:19:12.290 "strip_size_kb": 64, 00:19:12.290 "state": "online", 00:19:12.290 "raid_level": "concat", 00:19:12.290 "superblock": true, 00:19:12.290 "num_base_bdevs": 4, 00:19:12.290 "num_base_bdevs_discovered": 4, 00:19:12.290 "num_base_bdevs_operational": 4, 00:19:12.290 "base_bdevs_list": [ 00:19:12.290 { 00:19:12.290 "name": "BaseBdev1", 00:19:12.290 "uuid": "d03b5336-3032-4bfa-af49-3848bb4e972b", 00:19:12.290 "is_configured": true, 00:19:12.290 "data_offset": 2048, 00:19:12.290 "data_size": 63488 00:19:12.290 }, 00:19:12.290 { 00:19:12.290 "name": "BaseBdev2", 00:19:12.290 "uuid": "bdeceb2c-04f8-43da-a8dc-b558e9f4748b", 00:19:12.290 "is_configured": true, 00:19:12.290 "data_offset": 2048, 00:19:12.290 "data_size": 63488 00:19:12.290 }, 00:19:12.290 { 00:19:12.290 "name": "BaseBdev3", 00:19:12.290 "uuid": "dae2ac9a-0a3d-421c-9204-660609e18da2", 00:19:12.290 "is_configured": true, 00:19:12.290 "data_offset": 2048, 00:19:12.290 "data_size": 63488 00:19:12.290 }, 00:19:12.290 { 00:19:12.290 "name": "BaseBdev4", 00:19:12.290 "uuid": "704d57ad-6d5b-45d9-a2ca-1c93c1533264", 00:19:12.290 "is_configured": true, 00:19:12.290 "data_offset": 2048, 00:19:12.290 "data_size": 63488 00:19:12.290 } 00:19:12.290 ] 00:19:12.290 }' 00:19:12.290 16:34:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:12.290 16:34:49 -- common/autotest_common.sh@10 -- # set +x 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:13.226 [2024-07-11 16:34:49.875259] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:13.226 [2024-07-11 16:34:49.875289] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:13.226 [2024-07-11 16:34:49.875347] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.226 16:34:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.485 16:34:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:13.485 "name": "Existed_Raid", 00:19:13.485 "uuid": "81186c9f-c275-4a3a-af48-dbfa94880f7c", 00:19:13.485 "strip_size_kb": 64, 00:19:13.485 "state": "offline", 00:19:13.485 "raid_level": "concat", 00:19:13.485 "superblock": true, 00:19:13.485 "num_base_bdevs": 4, 00:19:13.485 "num_base_bdevs_discovered": 3, 00:19:13.485 "num_base_bdevs_operational": 3, 00:19:13.485 "base_bdevs_list": [ 00:19:13.485 { 00:19:13.485 "name": null, 00:19:13.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.485 "is_configured": false, 00:19:13.485 "data_offset": 2048, 00:19:13.485 "data_size": 63488 00:19:13.485 }, 00:19:13.485 { 00:19:13.485 "name": "BaseBdev2", 00:19:13.485 "uuid": "bdeceb2c-04f8-43da-a8dc-b558e9f4748b", 00:19:13.485 "is_configured": true, 00:19:13.485 "data_offset": 2048, 00:19:13.485 "data_size": 63488 00:19:13.485 }, 00:19:13.485 { 00:19:13.485 "name": "BaseBdev3", 00:19:13.485 "uuid": "dae2ac9a-0a3d-421c-9204-660609e18da2", 00:19:13.485 "is_configured": true, 00:19:13.485 "data_offset": 2048, 00:19:13.485 "data_size": 63488 00:19:13.485 }, 00:19:13.485 { 00:19:13.485 "name": "BaseBdev4", 00:19:13.485 "uuid": "704d57ad-6d5b-45d9-a2ca-1c93c1533264", 00:19:13.485 "is_configured": true, 00:19:13.485 "data_offset": 2048, 00:19:13.485 "data_size": 63488 00:19:13.485 } 00:19:13.485 ] 00:19:13.485 }' 00:19:13.485 16:34:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:13.485 16:34:50 -- common/autotest_common.sh@10 -- # set +x 00:19:14.052 16:34:50 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:14.053 16:34:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:14.053 16:34:50 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.053 16:34:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:14.311 16:34:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:14.311 16:34:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:14.311 16:34:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:14.570 [2024-07-11 16:34:51.129333] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:14.570 16:34:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:14.570 16:34:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:14.570 16:34:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.570 16:34:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:14.828 16:34:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:14.828 16:34:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:14.828 16:34:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:15.087 [2024-07-11 16:34:51.688406] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:15.087 16:34:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:15.087 16:34:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:15.087 16:34:51 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.087 16:34:51 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:15.346 16:34:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:15.346 16:34:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:15.346 16:34:51 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:15.603 [2024-07-11 16:34:52.167419] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:15.603 [2024-07-11 16:34:52.167478] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:19:15.603 16:34:52 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:15.603 16:34:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:15.603 16:34:52 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.603 16:34:52 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:15.861 16:34:52 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:15.861 16:34:52 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:15.861 16:34:52 -- bdev/bdev_raid.sh@287 -- # killprocess 123099 00:19:15.861 16:34:52 -- common/autotest_common.sh@926 -- # '[' -z 123099 ']' 00:19:15.861 16:34:52 -- common/autotest_common.sh@930 -- # kill -0 123099 00:19:15.861 16:34:52 -- common/autotest_common.sh@931 -- # uname 00:19:15.861 16:34:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:15.861 16:34:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123099 00:19:15.861 killing process with pid 123099 00:19:15.861 16:34:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:15.861 16:34:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:15.861 16:34:52 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 123099' 00:19:15.861 16:34:52 -- common/autotest_common.sh@945 -- # kill 123099 00:19:15.861 16:34:52 -- common/autotest_common.sh@950 -- # wait 123099 00:19:15.861 [2024-07-11 16:34:52.454446] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:15.861 [2024-07-11 16:34:52.454540] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:16.793 ************************************ 00:19:16.793 END TEST raid_state_function_test_sb 00:19:16.793 ************************************ 00:19:16.793 16:34:53 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:16.793 00:19:16.793 real 0m14.268s 00:19:16.793 user 0m25.834s 00:19:16.793 sys 0m1.475s 00:19:16.793 16:34:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:16.793 16:34:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.793 16:34:53 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:19:16.793 16:34:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:16.793 16:34:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:16.793 16:34:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.793 ************************************ 00:19:16.793 START TEST raid_superblock_test 00:19:16.793 ************************************ 00:19:16.793 16:34:53 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:19:16.793 16:34:53 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:19:16.793 16:34:53 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:16.793 16:34:53 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:16.793 16:34:53 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:16.793 16:34:53 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:16.793 16:34:53 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:16.794 16:34:53 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:16.794 16:34:53 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:16.794 16:34:53 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:16.794 16:34:53 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:16.794 16:34:53 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:16.794 16:34:53 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:16.794 16:34:53 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:16.794 16:34:53 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:19:16.794 16:34:53 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:19:16.794 16:34:53 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:19:16.794 16:34:53 -- bdev/bdev_raid.sh@357 -- # raid_pid=123558 00:19:16.794 16:34:53 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123558 /var/tmp/spdk-raid.sock 00:19:16.794 16:34:53 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:16.794 16:34:53 -- common/autotest_common.sh@819 -- # '[' -z 123558 ']' 00:19:16.794 16:34:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:16.794 16:34:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:16.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:16.794 16:34:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
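[editor's note: not part of the captured output] raid_superblock_test, which starts here, does not link against SPDK directly; it launches the standalone bdev_svc app on a private socket and drives it over JSON-RPC. A minimal sketch of that round trip, using only commands that appear verbatim in the trace (the rpc shell wrapper is added here for brevity and is not in the test; paths and sizes are specific to this CI run):

    # start the RPC target; every later call goes through its UNIX socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # four 32 MiB malloc bdevs, each wrapped in a passthru bdev with a fixed UUID
    rpc bdev_malloc_create 32 512 -b malloc1
    rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # ...repeated for malloc2..malloc4 / pt2..pt4, then assembled with a
    # superblock (-s) and a 64 KiB strip size (-z 64):
    rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s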
00:19:16.794 16:34:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:16.794 16:34:53 -- common/autotest_common.sh@10 -- # set +x 00:19:16.794 [2024-07-11 16:34:53.474909] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:16.794 [2024-07-11 16:34:53.475131] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123558 ] 00:19:17.052 [2024-07-11 16:34:53.636992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.052 [2024-07-11 16:34:53.794960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.309 [2024-07-11 16:34:53.962683] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.568 16:34:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:17.568 16:34:54 -- common/autotest_common.sh@852 -- # return 0 00:19:17.568 16:34:54 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:17.568 16:34:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:17.568 16:34:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:17.568 16:34:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:17.568 16:34:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:17.568 16:34:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:17.568 16:34:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:17.568 16:34:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:17.568 16:34:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:17.827 malloc1 00:19:17.827 16:34:54 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:18.085 [2024-07-11 16:34:54.755426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:18.085 [2024-07-11 16:34:54.755527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.085 [2024-07-11 16:34:54.755560] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:18.085 [2024-07-11 16:34:54.755603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.085 [2024-07-11 16:34:54.757648] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.085 [2024-07-11 16:34:54.757701] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:18.085 pt1 00:19:18.085 16:34:54 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:18.085 16:34:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:18.085 16:34:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:18.085 16:34:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:18.085 16:34:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:18.085 16:34:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:18.085 16:34:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:18.085 16:34:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:18.085 16:34:54 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:18.343 malloc2 00:19:18.343 16:34:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:18.601 [2024-07-11 16:34:55.212313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:18.601 [2024-07-11 16:34:55.212401] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.601 [2024-07-11 16:34:55.212441] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:18.601 [2024-07-11 16:34:55.212490] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.601 [2024-07-11 16:34:55.214435] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.601 [2024-07-11 16:34:55.214485] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:18.601 pt2 00:19:18.601 16:34:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:18.601 16:34:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:18.601 16:34:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:18.601 16:34:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:18.601 16:34:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:18.601 16:34:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:18.601 16:34:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:18.601 16:34:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:18.601 16:34:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:18.859 malloc3 00:19:18.859 16:34:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:18.859 [2024-07-11 16:34:55.592969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:18.859 [2024-07-11 16:34:55.593055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.859 [2024-07-11 16:34:55.593093] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:18.859 [2024-07-11 16:34:55.593133] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.859 [2024-07-11 16:34:55.595011] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.859 [2024-07-11 16:34:55.595061] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:18.859 pt3 00:19:18.859 16:34:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:18.859 16:34:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:18.859 16:34:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:18.859 16:34:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:18.859 16:34:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:18.859 16:34:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:18.859 16:34:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:18.859 16:34:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:18.859 16:34:55 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:19.119 malloc4 00:19:19.119 16:34:55 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:19.393 [2024-07-11 16:34:55.977201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:19.393 [2024-07-11 16:34:55.977303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.393 [2024-07-11 16:34:55.977352] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:19.393 [2024-07-11 16:34:55.977391] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.393 [2024-07-11 16:34:55.979277] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.393 [2024-07-11 16:34:55.979342] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:19.393 pt4 00:19:19.393 16:34:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:19.393 16:34:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:19.393 16:34:55 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:19.393 [2024-07-11 16:34:56.165314] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:19.393 [2024-07-11 16:34:56.166937] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:19.393 [2024-07-11 16:34:56.167023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:19.393 [2024-07-11 16:34:56.167098] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:19.393 [2024-07-11 16:34:56.167327] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:19:19.393 [2024-07-11 16:34:56.167341] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:19.393 [2024-07-11 16:34:56.167453] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:19:19.393 [2024-07-11 16:34:56.167797] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:19:19.393 [2024-07-11 16:34:56.167821] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:19:19.393 [2024-07-11 16:34:56.167989] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.393 16:34:56 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:19.393 16:34:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:19.393 16:34:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:19.393 16:34:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:19.393 16:34:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:19.393 16:34:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:19.393 16:34:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:19.393 16:34:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:19.393 16:34:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:19.393 16:34:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:19.393 16:34:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:19:19.393 16:34:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.673 16:34:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:19.673 "name": "raid_bdev1", 00:19:19.673 "uuid": "f2ef2bff-be5e-49b6-ad0c-5b2da622c385", 00:19:19.673 "strip_size_kb": 64, 00:19:19.673 "state": "online", 00:19:19.673 "raid_level": "concat", 00:19:19.673 "superblock": true, 00:19:19.673 "num_base_bdevs": 4, 00:19:19.673 "num_base_bdevs_discovered": 4, 00:19:19.673 "num_base_bdevs_operational": 4, 00:19:19.673 "base_bdevs_list": [ 00:19:19.673 { 00:19:19.673 "name": "pt1", 00:19:19.673 "uuid": "136238c6-4642-523c-8b5e-da54575faf47", 00:19:19.673 "is_configured": true, 00:19:19.673 "data_offset": 2048, 00:19:19.673 "data_size": 63488 00:19:19.673 }, 00:19:19.673 { 00:19:19.673 "name": "pt2", 00:19:19.673 "uuid": "d82f3036-c0b8-57b8-9012-f5c17438fbc0", 00:19:19.673 "is_configured": true, 00:19:19.673 "data_offset": 2048, 00:19:19.673 "data_size": 63488 00:19:19.673 }, 00:19:19.673 { 00:19:19.673 "name": "pt3", 00:19:19.673 "uuid": "eb03b9aa-9bb4-5a0a-90af-d855c817985a", 00:19:19.673 "is_configured": true, 00:19:19.673 "data_offset": 2048, 00:19:19.673 "data_size": 63488 00:19:19.673 }, 00:19:19.673 { 00:19:19.673 "name": "pt4", 00:19:19.673 "uuid": "ca7671c1-c1a2-517c-8d73-5f4a0caaa9bf", 00:19:19.673 "is_configured": true, 00:19:19.673 "data_offset": 2048, 00:19:19.673 "data_size": 63488 00:19:19.673 } 00:19:19.673 ] 00:19:19.673 }' 00:19:19.673 16:34:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:19.673 16:34:56 -- common/autotest_common.sh@10 -- # set +x 00:19:20.270 16:34:57 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:20.270 16:34:57 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:20.528 [2024-07-11 16:34:57.233745] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.528 16:34:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=f2ef2bff-be5e-49b6-ad0c-5b2da622c385 00:19:20.528 16:34:57 -- bdev/bdev_raid.sh@380 -- # '[' -z f2ef2bff-be5e-49b6-ad0c-5b2da622c385 ']' 00:19:20.528 16:34:57 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:20.787 [2024-07-11 16:34:57.465558] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.787 [2024-07-11 16:34:57.465585] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.787 [2024-07-11 16:34:57.465650] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.787 [2024-07-11 16:34:57.465715] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.787 [2024-07-11 16:34:57.465726] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:19:20.787 16:34:57 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.787 16:34:57 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:21.044 16:34:57 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:21.044 16:34:57 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:21.044 16:34:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:21.044 16:34:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
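[editor's note: not part of the captured output] Every verify_raid_bdev_state call traced in this log reduces to the same query: dump all raid bdevs, select the one under test, and compare its fields with the caller's expectations. A rough equivalent, reusing the hypothetical rpc wrapper from the note above (the field names are exactly those visible in the raid_bdev_info dumps in this trace):

    tmp=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # the expected values come from the caller, e.g. 'online concat 64 4'
    [ "$(jq -r .state <<< "$tmp")" = online ]
    [ "$(jq -r .raid_level <<< "$tmp")" = concat ]
    [ "$(jq -r .strip_size_kb <<< "$tmp")" -eq 64 ]
    [ "$(jq -r .num_base_bdevs_discovered <<< "$tmp")" -eq 4 ]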
00:19:21.044 16:34:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:21.044 16:34:57 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:21.302 16:34:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:21.302 16:34:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:21.561 16:34:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:21.561 16:34:58 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:21.819 16:34:58 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:21.819 16:34:58 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:22.078 16:34:58 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:19:22.078 16:34:58 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:22.078 16:34:58 -- common/autotest_common.sh@640 -- # local es=0 00:19:22.078 16:34:58 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:22.078 16:34:58 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:22.078 16:34:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:22.078 16:34:58 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:22.078 16:34:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:22.078 16:34:58 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:22.078 16:34:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:22.078 16:34:58 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:22.078 16:34:58 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:22.078 16:34:58 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:19:22.336 [2024-07-11 16:34:58.941836] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:22.336 [2024-07-11 16:34:58.943487] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:22.336 [2024-07-11 16:34:58.943542] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:22.336 [2024-07-11 16:34:58.943587] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:19:22.336 [2024-07-11 16:34:58.943640] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:19:22.336 [2024-07-11 16:34:58.943780] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:19:22.336 [2024-07-11 16:34:58.943826] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:19:22.336 
[2024-07-11 16:34:58.943893] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:19:22.336 [2024-07-11 16:34:58.943920] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:22.336 [2024-07-11 16:34:58.943930] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:19:22.336 request: 00:19:22.336 { 00:19:22.336 "name": "raid_bdev1", 00:19:22.336 "raid_level": "concat", 00:19:22.336 "base_bdevs": [ 00:19:22.336 "malloc1", 00:19:22.336 "malloc2", 00:19:22.336 "malloc3", 00:19:22.336 "malloc4" 00:19:22.336 ], 00:19:22.336 "superblock": false, 00:19:22.336 "strip_size_kb": 64, 00:19:22.336 "method": "bdev_raid_create", 00:19:22.336 "req_id": 1 00:19:22.336 } 00:19:22.336 Got JSON-RPC error response 00:19:22.336 response: 00:19:22.336 { 00:19:22.336 "code": -17, 00:19:22.336 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:22.336 } 00:19:22.336 16:34:58 -- common/autotest_common.sh@643 -- # es=1 00:19:22.336 16:34:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:22.336 16:34:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:22.336 16:34:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:22.336 16:34:58 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.336 16:34:58 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:19:22.336 16:34:59 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:19:22.336 16:34:59 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:19:22.336 16:34:59 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:22.595 [2024-07-11 16:34:59.301837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:22.595 [2024-07-11 16:34:59.301914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.595 [2024-07-11 16:34:59.301943] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:22.595 [2024-07-11 16:34:59.301966] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.595 [2024-07-11 16:34:59.303881] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.595 [2024-07-11 16:34:59.303961] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:22.595 [2024-07-11 16:34:59.304073] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:22.595 [2024-07-11 16:34:59.304173] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:22.595 pt1 00:19:22.595 16:34:59 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:22.595 16:34:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:22.595 16:34:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:22.595 16:34:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:22.595 16:34:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:22.595 16:34:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:22.595 16:34:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:22.595 16:34:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:22.595 16:34:59 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:19:22.595 16:34:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:22.595 16:34:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.595 16:34:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.853 16:34:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:22.853 "name": "raid_bdev1", 00:19:22.853 "uuid": "f2ef2bff-be5e-49b6-ad0c-5b2da622c385", 00:19:22.853 "strip_size_kb": 64, 00:19:22.853 "state": "configuring", 00:19:22.853 "raid_level": "concat", 00:19:22.853 "superblock": true, 00:19:22.853 "num_base_bdevs": 4, 00:19:22.853 "num_base_bdevs_discovered": 1, 00:19:22.853 "num_base_bdevs_operational": 4, 00:19:22.853 "base_bdevs_list": [ 00:19:22.853 { 00:19:22.853 "name": "pt1", 00:19:22.853 "uuid": "136238c6-4642-523c-8b5e-da54575faf47", 00:19:22.853 "is_configured": true, 00:19:22.853 "data_offset": 2048, 00:19:22.853 "data_size": 63488 00:19:22.853 }, 00:19:22.853 { 00:19:22.853 "name": null, 00:19:22.853 "uuid": "d82f3036-c0b8-57b8-9012-f5c17438fbc0", 00:19:22.853 "is_configured": false, 00:19:22.853 "data_offset": 2048, 00:19:22.853 "data_size": 63488 00:19:22.853 }, 00:19:22.853 { 00:19:22.853 "name": null, 00:19:22.853 "uuid": "eb03b9aa-9bb4-5a0a-90af-d855c817985a", 00:19:22.853 "is_configured": false, 00:19:22.853 "data_offset": 2048, 00:19:22.853 "data_size": 63488 00:19:22.853 }, 00:19:22.853 { 00:19:22.853 "name": null, 00:19:22.853 "uuid": "ca7671c1-c1a2-517c-8d73-5f4a0caaa9bf", 00:19:22.853 "is_configured": false, 00:19:22.853 "data_offset": 2048, 00:19:22.853 "data_size": 63488 00:19:22.853 } 00:19:22.853 ] 00:19:22.853 }' 00:19:22.853 16:34:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:22.853 16:34:59 -- common/autotest_common.sh@10 -- # set +x 00:19:23.420 16:35:00 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:19:23.420 16:35:00 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:23.678 [2024-07-11 16:35:00.326078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:23.678 [2024-07-11 16:35:00.326166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.678 [2024-07-11 16:35:00.326206] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:23.678 [2024-07-11 16:35:00.326226] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.678 [2024-07-11 16:35:00.326771] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.678 [2024-07-11 16:35:00.326869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:23.678 [2024-07-11 16:35:00.326967] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:23.678 [2024-07-11 16:35:00.326995] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:23.678 pt2 00:19:23.678 16:35:00 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:23.936 [2024-07-11 16:35:00.506099] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:23.936 16:35:00 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:19:23.936 16:35:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:19:23.936 16:35:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:23.936 16:35:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:23.936 16:35:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:23.936 16:35:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:23.936 16:35:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:23.936 16:35:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:23.936 16:35:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:23.936 16:35:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:23.936 16:35:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.936 16:35:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.194 16:35:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:24.194 "name": "raid_bdev1", 00:19:24.194 "uuid": "f2ef2bff-be5e-49b6-ad0c-5b2da622c385", 00:19:24.194 "strip_size_kb": 64, 00:19:24.194 "state": "configuring", 00:19:24.194 "raid_level": "concat", 00:19:24.194 "superblock": true, 00:19:24.194 "num_base_bdevs": 4, 00:19:24.194 "num_base_bdevs_discovered": 1, 00:19:24.194 "num_base_bdevs_operational": 4, 00:19:24.194 "base_bdevs_list": [ 00:19:24.194 { 00:19:24.194 "name": "pt1", 00:19:24.194 "uuid": "136238c6-4642-523c-8b5e-da54575faf47", 00:19:24.194 "is_configured": true, 00:19:24.194 "data_offset": 2048, 00:19:24.194 "data_size": 63488 00:19:24.194 }, 00:19:24.194 { 00:19:24.194 "name": null, 00:19:24.194 "uuid": "d82f3036-c0b8-57b8-9012-f5c17438fbc0", 00:19:24.194 "is_configured": false, 00:19:24.194 "data_offset": 2048, 00:19:24.194 "data_size": 63488 00:19:24.194 }, 00:19:24.194 { 00:19:24.194 "name": null, 00:19:24.194 "uuid": "eb03b9aa-9bb4-5a0a-90af-d855c817985a", 00:19:24.194 "is_configured": false, 00:19:24.194 "data_offset": 2048, 00:19:24.194 "data_size": 63488 00:19:24.194 }, 00:19:24.194 { 00:19:24.194 "name": null, 00:19:24.194 "uuid": "ca7671c1-c1a2-517c-8d73-5f4a0caaa9bf", 00:19:24.194 "is_configured": false, 00:19:24.194 "data_offset": 2048, 00:19:24.194 "data_size": 63488 00:19:24.194 } 00:19:24.194 ] 00:19:24.194 }' 00:19:24.194 16:35:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:24.195 16:35:00 -- common/autotest_common.sh@10 -- # set +x 00:19:24.761 16:35:01 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:19:24.761 16:35:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:24.761 16:35:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:24.761 [2024-07-11 16:35:01.541312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:24.761 [2024-07-11 16:35:01.541402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:24.761 [2024-07-11 16:35:01.541441] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:24.761 [2024-07-11 16:35:01.541461] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:24.761 [2024-07-11 16:35:01.541963] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:24.761 [2024-07-11 16:35:01.542044] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:24.761 [2024-07-11 16:35:01.542153] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:19:24.761 [2024-07-11 16:35:01.542180] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:24.761 pt2 00:19:24.761 16:35:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:24.761 16:35:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:24.761 16:35:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:25.020 [2024-07-11 16:35:01.809358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:25.020 [2024-07-11 16:35:01.809471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.020 [2024-07-11 16:35:01.809501] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:25.020 [2024-07-11 16:35:01.809525] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.020 [2024-07-11 16:35:01.809998] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.020 [2024-07-11 16:35:01.810061] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:25.020 [2024-07-11 16:35:01.810175] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:25.020 [2024-07-11 16:35:01.810201] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:25.020 pt3 00:19:25.278 16:35:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:25.278 16:35:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:25.278 16:35:01 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:25.278 [2024-07-11 16:35:02.017404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:25.278 [2024-07-11 16:35:02.017492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:25.278 [2024-07-11 16:35:02.017529] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:25.278 [2024-07-11 16:35:02.017554] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:25.278 [2024-07-11 16:35:02.017997] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:25.278 [2024-07-11 16:35:02.018057] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:25.278 [2024-07-11 16:35:02.018179] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:25.278 [2024-07-11 16:35:02.018206] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:25.278 [2024-07-11 16:35:02.018336] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:19:25.278 [2024-07-11 16:35:02.018348] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:19:25.278 [2024-07-11 16:35:02.018447] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:25.278 [2024-07-11 16:35:02.018774] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:19:25.278 [2024-07-11 16:35:02.018796] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:19:25.278 [2024-07-11 16:35:02.018921] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:19:25.278 pt4 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.278 16:35:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.537 16:35:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:25.537 "name": "raid_bdev1", 00:19:25.537 "uuid": "f2ef2bff-be5e-49b6-ad0c-5b2da622c385", 00:19:25.537 "strip_size_kb": 64, 00:19:25.537 "state": "online", 00:19:25.537 "raid_level": "concat", 00:19:25.537 "superblock": true, 00:19:25.537 "num_base_bdevs": 4, 00:19:25.537 "num_base_bdevs_discovered": 4, 00:19:25.537 "num_base_bdevs_operational": 4, 00:19:25.537 "base_bdevs_list": [ 00:19:25.537 { 00:19:25.537 "name": "pt1", 00:19:25.537 "uuid": "136238c6-4642-523c-8b5e-da54575faf47", 00:19:25.537 "is_configured": true, 00:19:25.537 "data_offset": 2048, 00:19:25.537 "data_size": 63488 00:19:25.537 }, 00:19:25.537 { 00:19:25.537 "name": "pt2", 00:19:25.537 "uuid": "d82f3036-c0b8-57b8-9012-f5c17438fbc0", 00:19:25.537 "is_configured": true, 00:19:25.537 "data_offset": 2048, 00:19:25.537 "data_size": 63488 00:19:25.537 }, 00:19:25.537 { 00:19:25.537 "name": "pt3", 00:19:25.537 "uuid": "eb03b9aa-9bb4-5a0a-90af-d855c817985a", 00:19:25.537 "is_configured": true, 00:19:25.537 "data_offset": 2048, 00:19:25.537 "data_size": 63488 00:19:25.537 }, 00:19:25.537 { 00:19:25.537 "name": "pt4", 00:19:25.537 "uuid": "ca7671c1-c1a2-517c-8d73-5f4a0caaa9bf", 00:19:25.537 "is_configured": true, 00:19:25.537 "data_offset": 2048, 00:19:25.537 "data_size": 63488 00:19:25.537 } 00:19:25.537 ] 00:19:25.537 }' 00:19:25.537 16:35:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:25.537 16:35:02 -- common/autotest_common.sh@10 -- # set +x 00:19:26.472 16:35:02 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:26.472 16:35:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:19:26.472 [2024-07-11 16:35:03.163730] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:26.472 16:35:03 -- bdev/bdev_raid.sh@430 -- # '[' f2ef2bff-be5e-49b6-ad0c-5b2da622c385 '!=' f2ef2bff-be5e-49b6-ad0c-5b2da622c385 ']' 00:19:26.472 16:35:03 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:19:26.472 16:35:03 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:26.472 16:35:03 -- bdev/bdev_raid.sh@197 -- # return 1 00:19:26.472 16:35:03 -- bdev/bdev_raid.sh@511 -- # killprocess 123558 00:19:26.472 16:35:03 -- common/autotest_common.sh@926 -- # '[' 
-z 123558 ']' 00:19:26.472 16:35:03 -- common/autotest_common.sh@930 -- # kill -0 123558 00:19:26.472 16:35:03 -- common/autotest_common.sh@931 -- # uname 00:19:26.472 16:35:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:26.472 16:35:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123558 00:19:26.472 killing process with pid 123558 00:19:26.472 16:35:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:26.472 16:35:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:26.472 16:35:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 123558' 00:19:26.472 16:35:03 -- common/autotest_common.sh@945 -- # kill 123558 00:19:26.472 16:35:03 -- common/autotest_common.sh@950 -- # wait 123558 00:19:26.472 [2024-07-11 16:35:03.190971] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.472 [2024-07-11 16:35:03.191100] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.472 [2024-07-11 16:35:03.191166] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:26.472 [2024-07-11 16:35:03.191184] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:19:26.729 [2024-07-11 16:35:03.441846] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.663 ************************************ 00:19:27.663 END TEST raid_superblock_test 00:19:27.663 ************************************ 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:27.663 00:19:27.663 real 0m10.941s 00:19:27.663 user 0m19.319s 00:19:27.663 sys 0m1.177s 00:19:27.663 16:35:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.663 16:35:04 -- common/autotest_common.sh@10 -- # set +x 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:19:27.663 16:35:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:27.663 16:35:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:27.663 16:35:04 -- common/autotest_common.sh@10 -- # set +x 00:19:27.663 ************************************ 00:19:27.663 START TEST raid_state_function_test 00:19:27.663 ************************************ 00:19:27.663 16:35:04 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:27.663 16:35:04 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=123889 00:19:27.663 Process raid pid: 123889 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123889' 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123889 /var/tmp/spdk-raid.sock 00:19:27.663 16:35:04 -- common/autotest_common.sh@819 -- # '[' -z 123889 ']' 00:19:27.663 16:35:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:27.663 16:35:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:27.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:27.663 16:35:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:27.663 16:35:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:27.663 16:35:04 -- common/autotest_common.sh@10 -- # set +x 00:19:27.663 16:35:04 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:27.663 [2024-07-11 16:35:04.466768] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:19:27.663 [2024-07-11 16:35:04.467770] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.921 [2024-07-11 16:35:04.638355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.179 [2024-07-11 16:35:04.838370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.438 [2024-07-11 16:35:05.003423] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.696 16:35:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:28.696 16:35:05 -- common/autotest_common.sh@852 -- # return 0 00:19:28.696 16:35:05 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:28.954 [2024-07-11 16:35:05.505820] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.954 [2024-07-11 16:35:05.505887] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.954 [2024-07-11 16:35:05.505899] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.954 [2024-07-11 16:35:05.505919] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.954 [2024-07-11 16:35:05.505926] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:28.954 [2024-07-11 16:35:05.505960] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:28.954 [2024-07-11 16:35:05.505969] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:28.954 [2024-07-11 16:35:05.505997] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:28.954 16:35:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:28.954 16:35:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:28.954 16:35:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:28.954 16:35:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:28.954 16:35:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:28.954 16:35:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:28.954 16:35:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:28.954 16:35:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:28.954 16:35:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:28.954 16:35:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:28.954 16:35:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.954 16:35:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.212 16:35:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:29.212 "name": "Existed_Raid", 00:19:29.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.212 "strip_size_kb": 0, 00:19:29.212 "state": "configuring", 00:19:29.212 "raid_level": "raid1", 00:19:29.212 "superblock": false, 00:19:29.212 "num_base_bdevs": 4, 00:19:29.212 "num_base_bdevs_discovered": 0, 00:19:29.212 "num_base_bdevs_operational": 4, 00:19:29.212 "base_bdevs_list": [ 00:19:29.212 { 00:19:29.212 "name": 
"BaseBdev1", 00:19:29.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.212 "is_configured": false, 00:19:29.212 "data_offset": 0, 00:19:29.212 "data_size": 0 00:19:29.212 }, 00:19:29.212 { 00:19:29.212 "name": "BaseBdev2", 00:19:29.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.212 "is_configured": false, 00:19:29.212 "data_offset": 0, 00:19:29.212 "data_size": 0 00:19:29.212 }, 00:19:29.212 { 00:19:29.212 "name": "BaseBdev3", 00:19:29.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.212 "is_configured": false, 00:19:29.212 "data_offset": 0, 00:19:29.212 "data_size": 0 00:19:29.212 }, 00:19:29.212 { 00:19:29.212 "name": "BaseBdev4", 00:19:29.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.212 "is_configured": false, 00:19:29.212 "data_offset": 0, 00:19:29.212 "data_size": 0 00:19:29.212 } 00:19:29.212 ] 00:19:29.212 }' 00:19:29.212 16:35:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:29.212 16:35:05 -- common/autotest_common.sh@10 -- # set +x 00:19:29.778 16:35:06 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:30.036 [2024-07-11 16:35:06.653855] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:30.036 [2024-07-11 16:35:06.653907] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:30.036 16:35:06 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:30.294 [2024-07-11 16:35:06.909903] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:30.294 [2024-07-11 16:35:06.909948] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:30.294 [2024-07-11 16:35:06.909957] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:30.294 [2024-07-11 16:35:06.909982] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:30.294 [2024-07-11 16:35:06.909990] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:30.294 [2024-07-11 16:35:06.910017] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:30.294 [2024-07-11 16:35:06.910024] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:30.294 [2024-07-11 16:35:06.910042] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:30.294 16:35:06 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:30.553 [2024-07-11 16:35:07.123363] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.553 BaseBdev1 00:19:30.553 16:35:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:30.553 16:35:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:30.553 16:35:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:30.553 16:35:07 -- common/autotest_common.sh@889 -- # local i 00:19:30.553 16:35:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:30.553 16:35:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:30.553 16:35:07 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:30.811 16:35:07 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:30.811 [ 00:19:30.811 { 00:19:30.811 "name": "BaseBdev1", 00:19:30.811 "aliases": [ 00:19:30.811 "0fbfb608-8623-44a5-a57a-f414725b11e0" 00:19:30.811 ], 00:19:30.811 "product_name": "Malloc disk", 00:19:30.811 "block_size": 512, 00:19:30.811 "num_blocks": 65536, 00:19:30.811 "uuid": "0fbfb608-8623-44a5-a57a-f414725b11e0", 00:19:30.811 "assigned_rate_limits": { 00:19:30.811 "rw_ios_per_sec": 0, 00:19:30.811 "rw_mbytes_per_sec": 0, 00:19:30.811 "r_mbytes_per_sec": 0, 00:19:30.811 "w_mbytes_per_sec": 0 00:19:30.811 }, 00:19:30.811 "claimed": true, 00:19:30.811 "claim_type": "exclusive_write", 00:19:30.811 "zoned": false, 00:19:30.811 "supported_io_types": { 00:19:30.811 "read": true, 00:19:30.811 "write": true, 00:19:30.811 "unmap": true, 00:19:30.811 "write_zeroes": true, 00:19:30.811 "flush": true, 00:19:30.811 "reset": true, 00:19:30.811 "compare": false, 00:19:30.811 "compare_and_write": false, 00:19:30.811 "abort": true, 00:19:30.811 "nvme_admin": false, 00:19:30.811 "nvme_io": false 00:19:30.811 }, 00:19:30.811 "memory_domains": [ 00:19:30.811 { 00:19:30.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.811 "dma_device_type": 2 00:19:30.811 } 00:19:30.811 ], 00:19:30.811 "driver_specific": {} 00:19:30.811 } 00:19:30.811 ] 00:19:30.811 16:35:07 -- common/autotest_common.sh@895 -- # return 0 00:19:30.811 16:35:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:30.811 16:35:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:30.811 16:35:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:30.811 16:35:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:30.811 16:35:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:30.811 16:35:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:30.811 16:35:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:30.811 16:35:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:30.811 16:35:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:30.811 16:35:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:30.811 16:35:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.811 16:35:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.070 16:35:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:31.070 "name": "Existed_Raid", 00:19:31.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.070 "strip_size_kb": 0, 00:19:31.070 "state": "configuring", 00:19:31.070 "raid_level": "raid1", 00:19:31.070 "superblock": false, 00:19:31.070 "num_base_bdevs": 4, 00:19:31.070 "num_base_bdevs_discovered": 1, 00:19:31.070 "num_base_bdevs_operational": 4, 00:19:31.070 "base_bdevs_list": [ 00:19:31.070 { 00:19:31.070 "name": "BaseBdev1", 00:19:31.070 "uuid": "0fbfb608-8623-44a5-a57a-f414725b11e0", 00:19:31.070 "is_configured": true, 00:19:31.070 "data_offset": 0, 00:19:31.070 "data_size": 65536 00:19:31.070 }, 00:19:31.070 { 00:19:31.070 "name": "BaseBdev2", 00:19:31.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.070 "is_configured": false, 00:19:31.070 "data_offset": 0, 00:19:31.070 "data_size": 0 00:19:31.070 }, 
00:19:31.070 { 00:19:31.070 "name": "BaseBdev3", 00:19:31.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.070 "is_configured": false, 00:19:31.070 "data_offset": 0, 00:19:31.070 "data_size": 0 00:19:31.070 }, 00:19:31.070 { 00:19:31.070 "name": "BaseBdev4", 00:19:31.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.070 "is_configured": false, 00:19:31.070 "data_offset": 0, 00:19:31.070 "data_size": 0 00:19:31.070 } 00:19:31.070 ] 00:19:31.070 }' 00:19:31.070 16:35:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:31.070 16:35:07 -- common/autotest_common.sh@10 -- # set +x 00:19:31.637 16:35:08 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:31.923 [2024-07-11 16:35:08.583626] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:31.923 [2024-07-11 16:35:08.583671] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:31.923 16:35:08 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:19:31.923 16:35:08 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:32.182 [2024-07-11 16:35:08.775702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:32.182 [2024-07-11 16:35:08.777528] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:32.182 [2024-07-11 16:35:08.777640] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:32.182 [2024-07-11 16:35:08.777653] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:32.182 [2024-07-11 16:35:08.777676] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:32.182 [2024-07-11 16:35:08.777684] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:32.182 [2024-07-11 16:35:08.777699] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.182 16:35:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.441 16:35:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.441 "name": "Existed_Raid", 00:19:32.441 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:32.441 "strip_size_kb": 0, 00:19:32.441 "state": "configuring", 00:19:32.441 "raid_level": "raid1", 00:19:32.441 "superblock": false, 00:19:32.441 "num_base_bdevs": 4, 00:19:32.441 "num_base_bdevs_discovered": 1, 00:19:32.441 "num_base_bdevs_operational": 4, 00:19:32.441 "base_bdevs_list": [ 00:19:32.441 { 00:19:32.441 "name": "BaseBdev1", 00:19:32.441 "uuid": "0fbfb608-8623-44a5-a57a-f414725b11e0", 00:19:32.441 "is_configured": true, 00:19:32.441 "data_offset": 0, 00:19:32.441 "data_size": 65536 00:19:32.441 }, 00:19:32.441 { 00:19:32.441 "name": "BaseBdev2", 00:19:32.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.441 "is_configured": false, 00:19:32.441 "data_offset": 0, 00:19:32.441 "data_size": 0 00:19:32.441 }, 00:19:32.441 { 00:19:32.441 "name": "BaseBdev3", 00:19:32.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.441 "is_configured": false, 00:19:32.441 "data_offset": 0, 00:19:32.441 "data_size": 0 00:19:32.441 }, 00:19:32.441 { 00:19:32.441 "name": "BaseBdev4", 00:19:32.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.441 "is_configured": false, 00:19:32.441 "data_offset": 0, 00:19:32.441 "data_size": 0 00:19:32.441 } 00:19:32.441 ] 00:19:32.441 }' 00:19:32.441 16:35:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.441 16:35:09 -- common/autotest_common.sh@10 -- # set +x 00:19:33.009 16:35:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:33.268 [2024-07-11 16:35:09.918296] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.268 BaseBdev2 00:19:33.268 16:35:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:33.268 16:35:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:33.268 16:35:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:33.268 16:35:09 -- common/autotest_common.sh@889 -- # local i 00:19:33.268 16:35:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:33.268 16:35:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:33.268 16:35:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:33.527 16:35:10 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:33.787 [ 00:19:33.787 { 00:19:33.787 "name": "BaseBdev2", 00:19:33.787 "aliases": [ 00:19:33.787 "f5bbb44b-6d4c-4523-8d15-d568e6de761b" 00:19:33.787 ], 00:19:33.787 "product_name": "Malloc disk", 00:19:33.787 "block_size": 512, 00:19:33.787 "num_blocks": 65536, 00:19:33.787 "uuid": "f5bbb44b-6d4c-4523-8d15-d568e6de761b", 00:19:33.787 "assigned_rate_limits": { 00:19:33.787 "rw_ios_per_sec": 0, 00:19:33.787 "rw_mbytes_per_sec": 0, 00:19:33.787 "r_mbytes_per_sec": 0, 00:19:33.787 "w_mbytes_per_sec": 0 00:19:33.787 }, 00:19:33.787 "claimed": true, 00:19:33.787 "claim_type": "exclusive_write", 00:19:33.787 "zoned": false, 00:19:33.787 "supported_io_types": { 00:19:33.787 "read": true, 00:19:33.787 "write": true, 00:19:33.787 "unmap": true, 00:19:33.787 "write_zeroes": true, 00:19:33.787 "flush": true, 00:19:33.787 "reset": true, 00:19:33.787 "compare": false, 00:19:33.787 "compare_and_write": false, 00:19:33.787 "abort": true, 00:19:33.787 "nvme_admin": false, 00:19:33.787 "nvme_io": false 00:19:33.787 }, 00:19:33.787 "memory_domains": [ 00:19:33.787 { 
00:19:33.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.787 "dma_device_type": 2 00:19:33.787 } 00:19:33.787 ], 00:19:33.787 "driver_specific": {} 00:19:33.787 } 00:19:33.787 ] 00:19:33.787 16:35:10 -- common/autotest_common.sh@895 -- # return 0 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:33.787 "name": "Existed_Raid", 00:19:33.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.787 "strip_size_kb": 0, 00:19:33.787 "state": "configuring", 00:19:33.787 "raid_level": "raid1", 00:19:33.787 "superblock": false, 00:19:33.787 "num_base_bdevs": 4, 00:19:33.787 "num_base_bdevs_discovered": 2, 00:19:33.787 "num_base_bdevs_operational": 4, 00:19:33.787 "base_bdevs_list": [ 00:19:33.787 { 00:19:33.787 "name": "BaseBdev1", 00:19:33.787 "uuid": "0fbfb608-8623-44a5-a57a-f414725b11e0", 00:19:33.787 "is_configured": true, 00:19:33.787 "data_offset": 0, 00:19:33.787 "data_size": 65536 00:19:33.787 }, 00:19:33.787 { 00:19:33.787 "name": "BaseBdev2", 00:19:33.787 "uuid": "f5bbb44b-6d4c-4523-8d15-d568e6de761b", 00:19:33.787 "is_configured": true, 00:19:33.787 "data_offset": 0, 00:19:33.787 "data_size": 65536 00:19:33.787 }, 00:19:33.787 { 00:19:33.787 "name": "BaseBdev3", 00:19:33.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.787 "is_configured": false, 00:19:33.787 "data_offset": 0, 00:19:33.787 "data_size": 0 00:19:33.787 }, 00:19:33.787 { 00:19:33.787 "name": "BaseBdev4", 00:19:33.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:33.787 "is_configured": false, 00:19:33.787 "data_offset": 0, 00:19:33.787 "data_size": 0 00:19:33.787 } 00:19:33.787 ] 00:19:33.787 }' 00:19:33.787 16:35:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:33.787 16:35:10 -- common/autotest_common.sh@10 -- # set +x 00:19:34.723 16:35:11 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:34.723 [2024-07-11 16:35:11.390040] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:34.723 BaseBdev3 00:19:34.723 16:35:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:34.723 16:35:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:34.723 16:35:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:34.723 16:35:11 -- 
common/autotest_common.sh@889 -- # local i 00:19:34.723 16:35:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:34.723 16:35:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:34.723 16:35:11 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:34.981 16:35:11 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:35.240 [ 00:19:35.240 { 00:19:35.240 "name": "BaseBdev3", 00:19:35.240 "aliases": [ 00:19:35.240 "a4ca2a47-cffa-4d01-bc86-f94455c5ed42" 00:19:35.240 ], 00:19:35.240 "product_name": "Malloc disk", 00:19:35.240 "block_size": 512, 00:19:35.240 "num_blocks": 65536, 00:19:35.240 "uuid": "a4ca2a47-cffa-4d01-bc86-f94455c5ed42", 00:19:35.240 "assigned_rate_limits": { 00:19:35.240 "rw_ios_per_sec": 0, 00:19:35.240 "rw_mbytes_per_sec": 0, 00:19:35.240 "r_mbytes_per_sec": 0, 00:19:35.240 "w_mbytes_per_sec": 0 00:19:35.240 }, 00:19:35.240 "claimed": true, 00:19:35.240 "claim_type": "exclusive_write", 00:19:35.240 "zoned": false, 00:19:35.240 "supported_io_types": { 00:19:35.240 "read": true, 00:19:35.240 "write": true, 00:19:35.240 "unmap": true, 00:19:35.240 "write_zeroes": true, 00:19:35.240 "flush": true, 00:19:35.240 "reset": true, 00:19:35.240 "compare": false, 00:19:35.240 "compare_and_write": false, 00:19:35.240 "abort": true, 00:19:35.240 "nvme_admin": false, 00:19:35.240 "nvme_io": false 00:19:35.240 }, 00:19:35.240 "memory_domains": [ 00:19:35.240 { 00:19:35.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.240 "dma_device_type": 2 00:19:35.240 } 00:19:35.240 ], 00:19:35.240 "driver_specific": {} 00:19:35.240 } 00:19:35.240 ] 00:19:35.240 16:35:11 -- common/autotest_common.sh@895 -- # return 0 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.240 16:35:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.240 16:35:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:35.240 "name": "Existed_Raid", 00:19:35.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.240 "strip_size_kb": 0, 00:19:35.240 "state": "configuring", 00:19:35.240 "raid_level": "raid1", 00:19:35.240 "superblock": false, 00:19:35.240 "num_base_bdevs": 4, 00:19:35.240 "num_base_bdevs_discovered": 3, 00:19:35.240 "num_base_bdevs_operational": 4, 00:19:35.240 "base_bdevs_list": [ 00:19:35.240 { 00:19:35.240 "name": "BaseBdev1", 
00:19:35.240 "uuid": "0fbfb608-8623-44a5-a57a-f414725b11e0", 00:19:35.240 "is_configured": true, 00:19:35.240 "data_offset": 0, 00:19:35.240 "data_size": 65536 00:19:35.240 }, 00:19:35.240 { 00:19:35.240 "name": "BaseBdev2", 00:19:35.240 "uuid": "f5bbb44b-6d4c-4523-8d15-d568e6de761b", 00:19:35.240 "is_configured": true, 00:19:35.240 "data_offset": 0, 00:19:35.240 "data_size": 65536 00:19:35.240 }, 00:19:35.240 { 00:19:35.240 "name": "BaseBdev3", 00:19:35.240 "uuid": "a4ca2a47-cffa-4d01-bc86-f94455c5ed42", 00:19:35.240 "is_configured": true, 00:19:35.240 "data_offset": 0, 00:19:35.240 "data_size": 65536 00:19:35.240 }, 00:19:35.240 { 00:19:35.240 "name": "BaseBdev4", 00:19:35.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.240 "is_configured": false, 00:19:35.240 "data_offset": 0, 00:19:35.240 "data_size": 0 00:19:35.240 } 00:19:35.240 ] 00:19:35.240 }' 00:19:35.240 16:35:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:35.240 16:35:12 -- common/autotest_common.sh@10 -- # set +x 00:19:36.176 16:35:12 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:36.176 [2024-07-11 16:35:12.969685] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:36.176 [2024-07-11 16:35:12.969738] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:19:36.176 [2024-07-11 16:35:12.969748] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:36.176 [2024-07-11 16:35:12.969874] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:36.176 [2024-07-11 16:35:12.970236] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:19:36.176 [2024-07-11 16:35:12.970251] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:19:36.176 BaseBdev4 00:19:36.176 [2024-07-11 16:35:12.970513] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.176 16:35:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:36.176 16:35:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:36.176 16:35:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:36.176 16:35:12 -- common/autotest_common.sh@889 -- # local i 00:19:36.176 16:35:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:36.176 16:35:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:36.176 16:35:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:36.433 16:35:13 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:36.691 [ 00:19:36.691 { 00:19:36.691 "name": "BaseBdev4", 00:19:36.691 "aliases": [ 00:19:36.691 "dbafc7e4-4040-4760-a861-d330809ce32d" 00:19:36.691 ], 00:19:36.691 "product_name": "Malloc disk", 00:19:36.691 "block_size": 512, 00:19:36.691 "num_blocks": 65536, 00:19:36.691 "uuid": "dbafc7e4-4040-4760-a861-d330809ce32d", 00:19:36.691 "assigned_rate_limits": { 00:19:36.691 "rw_ios_per_sec": 0, 00:19:36.691 "rw_mbytes_per_sec": 0, 00:19:36.691 "r_mbytes_per_sec": 0, 00:19:36.691 "w_mbytes_per_sec": 0 00:19:36.691 }, 00:19:36.691 "claimed": true, 00:19:36.691 "claim_type": "exclusive_write", 00:19:36.691 "zoned": false, 00:19:36.691 "supported_io_types": { 
00:19:36.692 "read": true, 00:19:36.692 "write": true, 00:19:36.692 "unmap": true, 00:19:36.692 "write_zeroes": true, 00:19:36.692 "flush": true, 00:19:36.692 "reset": true, 00:19:36.692 "compare": false, 00:19:36.692 "compare_and_write": false, 00:19:36.692 "abort": true, 00:19:36.692 "nvme_admin": false, 00:19:36.692 "nvme_io": false 00:19:36.692 }, 00:19:36.692 "memory_domains": [ 00:19:36.692 { 00:19:36.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.692 "dma_device_type": 2 00:19:36.692 } 00:19:36.692 ], 00:19:36.692 "driver_specific": {} 00:19:36.692 } 00:19:36.692 ] 00:19:36.692 16:35:13 -- common/autotest_common.sh@895 -- # return 0 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.692 16:35:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.949 16:35:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:36.949 "name": "Existed_Raid", 00:19:36.949 "uuid": "40a39aa7-ce06-4321-9a45-8f458acbf95b", 00:19:36.949 "strip_size_kb": 0, 00:19:36.949 "state": "online", 00:19:36.949 "raid_level": "raid1", 00:19:36.949 "superblock": false, 00:19:36.949 "num_base_bdevs": 4, 00:19:36.949 "num_base_bdevs_discovered": 4, 00:19:36.949 "num_base_bdevs_operational": 4, 00:19:36.949 "base_bdevs_list": [ 00:19:36.949 { 00:19:36.949 "name": "BaseBdev1", 00:19:36.949 "uuid": "0fbfb608-8623-44a5-a57a-f414725b11e0", 00:19:36.949 "is_configured": true, 00:19:36.949 "data_offset": 0, 00:19:36.949 "data_size": 65536 00:19:36.949 }, 00:19:36.949 { 00:19:36.949 "name": "BaseBdev2", 00:19:36.949 "uuid": "f5bbb44b-6d4c-4523-8d15-d568e6de761b", 00:19:36.949 "is_configured": true, 00:19:36.949 "data_offset": 0, 00:19:36.949 "data_size": 65536 00:19:36.949 }, 00:19:36.949 { 00:19:36.949 "name": "BaseBdev3", 00:19:36.949 "uuid": "a4ca2a47-cffa-4d01-bc86-f94455c5ed42", 00:19:36.949 "is_configured": true, 00:19:36.949 "data_offset": 0, 00:19:36.949 "data_size": 65536 00:19:36.949 }, 00:19:36.949 { 00:19:36.949 "name": "BaseBdev4", 00:19:36.949 "uuid": "dbafc7e4-4040-4760-a861-d330809ce32d", 00:19:36.949 "is_configured": true, 00:19:36.949 "data_offset": 0, 00:19:36.949 "data_size": 65536 00:19:36.949 } 00:19:36.949 ] 00:19:36.949 }' 00:19:36.949 16:35:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:36.950 16:35:13 -- common/autotest_common.sh@10 -- # set +x 00:19:37.515 16:35:14 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:37.774 [2024-07-11 16:35:14.397390] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.774 16:35:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.033 16:35:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:38.033 "name": "Existed_Raid", 00:19:38.033 "uuid": "40a39aa7-ce06-4321-9a45-8f458acbf95b", 00:19:38.033 "strip_size_kb": 0, 00:19:38.033 "state": "online", 00:19:38.033 "raid_level": "raid1", 00:19:38.033 "superblock": false, 00:19:38.033 "num_base_bdevs": 4, 00:19:38.033 "num_base_bdevs_discovered": 3, 00:19:38.033 "num_base_bdevs_operational": 3, 00:19:38.033 "base_bdevs_list": [ 00:19:38.033 { 00:19:38.033 "name": null, 00:19:38.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.033 "is_configured": false, 00:19:38.033 "data_offset": 0, 00:19:38.033 "data_size": 65536 00:19:38.033 }, 00:19:38.033 { 00:19:38.033 "name": "BaseBdev2", 00:19:38.033 "uuid": "f5bbb44b-6d4c-4523-8d15-d568e6de761b", 00:19:38.033 "is_configured": true, 00:19:38.033 "data_offset": 0, 00:19:38.033 "data_size": 65536 00:19:38.033 }, 00:19:38.033 { 00:19:38.033 "name": "BaseBdev3", 00:19:38.033 "uuid": "a4ca2a47-cffa-4d01-bc86-f94455c5ed42", 00:19:38.033 "is_configured": true, 00:19:38.033 "data_offset": 0, 00:19:38.033 "data_size": 65536 00:19:38.033 }, 00:19:38.033 { 00:19:38.033 "name": "BaseBdev4", 00:19:38.033 "uuid": "dbafc7e4-4040-4760-a861-d330809ce32d", 00:19:38.033 "is_configured": true, 00:19:38.033 "data_offset": 0, 00:19:38.033 "data_size": 65536 00:19:38.033 } 00:19:38.033 ] 00:19:38.033 }' 00:19:38.033 16:35:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:38.033 16:35:14 -- common/autotest_common.sh@10 -- # set +x 00:19:38.600 16:35:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:38.600 16:35:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:38.600 16:35:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.600 16:35:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:38.859 16:35:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:38.859 16:35:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:38.859 16:35:15 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:39.118 [2024-07-11 16:35:15.800898] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:39.118 16:35:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:39.118 16:35:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:39.118 16:35:15 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.118 16:35:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:39.377 16:35:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:39.377 16:35:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:39.377 16:35:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:39.635 [2024-07-11 16:35:16.355264] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:39.635 16:35:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:39.635 16:35:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:39.635 16:35:16 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:39.635 16:35:16 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.894 16:35:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:39.894 16:35:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:39.894 16:35:16 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:40.152 [2024-07-11 16:35:16.782369] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:40.152 [2024-07-11 16:35:16.782404] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:40.152 [2024-07-11 16:35:16.782467] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:40.152 [2024-07-11 16:35:16.845583] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:40.152 [2024-07-11 16:35:16.845621] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:19:40.152 16:35:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:40.152 16:35:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:40.152 16:35:16 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.152 16:35:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:40.411 16:35:17 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:40.411 16:35:17 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:40.411 16:35:17 -- bdev/bdev_raid.sh@287 -- # killprocess 123889 00:19:40.411 16:35:17 -- common/autotest_common.sh@926 -- # '[' -z 123889 ']' 00:19:40.411 16:35:17 -- common/autotest_common.sh@930 -- # kill -0 123889 00:19:40.411 16:35:17 -- common/autotest_common.sh@931 -- # uname 00:19:40.411 16:35:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:40.411 16:35:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 123889 00:19:40.411 killing process with pid 123889 00:19:40.411 16:35:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:40.411 16:35:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:40.411 16:35:17 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 123889' 00:19:40.411 16:35:17 -- common/autotest_common.sh@945 -- # kill 123889 00:19:40.411 16:35:17 -- common/autotest_common.sh@950 -- # wait 123889 00:19:40.411 [2024-07-11 16:35:17.066447] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:40.411 [2024-07-11 16:35:17.066578] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:41.346 ************************************ 00:19:41.346 END TEST raid_state_function_test 00:19:41.346 ************************************ 00:19:41.346 16:35:17 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:41.346 00:19:41.346 real 0m13.571s 00:19:41.346 user 0m24.564s 00:19:41.346 sys 0m1.440s 00:19:41.346 16:35:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.346 16:35:17 -- common/autotest_common.sh@10 -- # set +x 00:19:41.346 16:35:18 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:19:41.346 16:35:18 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:19:41.346 16:35:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:41.346 16:35:18 -- common/autotest_common.sh@10 -- # set +x 00:19:41.346 ************************************ 00:19:41.347 START TEST raid_state_function_test_sb 00:19:41.347 ************************************ 00:19:41.347 16:35:18 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:41.347 
16:35:18 -- bdev/bdev_raid.sh@226 -- # raid_pid=124340 00:19:41.347 Process raid pid: 124340 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124340' 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:41.347 16:35:18 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124340 /var/tmp/spdk-raid.sock 00:19:41.347 16:35:18 -- common/autotest_common.sh@819 -- # '[' -z 124340 ']' 00:19:41.347 16:35:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:41.347 16:35:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:41.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:41.347 16:35:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:41.347 16:35:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:41.347 16:35:18 -- common/autotest_common.sh@10 -- # set +x 00:19:41.347 [2024-07-11 16:35:18.102109] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:41.347 [2024-07-11 16:35:18.102286] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.606 [2024-07-11 16:35:18.260322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.864 [2024-07-11 16:35:18.420685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.864 [2024-07-11 16:35:18.585943] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:42.432 16:35:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:42.432 16:35:19 -- common/autotest_common.sh@852 -- # return 0 00:19:42.432 16:35:19 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:42.432 [2024-07-11 16:35:19.207672] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:42.432 [2024-07-11 16:35:19.207753] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:42.432 [2024-07-11 16:35:19.207767] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:42.432 [2024-07-11 16:35:19.207789] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:42.432 [2024-07-11 16:35:19.207796] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:42.432 [2024-07-11 16:35:19.207865] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:42.432 [2024-07-11 16:35:19.207873] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:42.432 [2024-07-11 16:35:19.207903] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:42.432 16:35:19 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:42.432 16:35:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:42.432 16:35:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:42.432 16:35:19 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid1 00:19:42.432 16:35:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:42.432 16:35:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:42.432 16:35:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.432 16:35:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.432 16:35:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.432 16:35:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.432 16:35:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.432 16:35:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.691 16:35:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.691 "name": "Existed_Raid", 00:19:42.691 "uuid": "1d4349bd-c6c8-4877-ab87-70926c104977", 00:19:42.691 "strip_size_kb": 0, 00:19:42.691 "state": "configuring", 00:19:42.691 "raid_level": "raid1", 00:19:42.691 "superblock": true, 00:19:42.691 "num_base_bdevs": 4, 00:19:42.691 "num_base_bdevs_discovered": 0, 00:19:42.691 "num_base_bdevs_operational": 4, 00:19:42.691 "base_bdevs_list": [ 00:19:42.691 { 00:19:42.691 "name": "BaseBdev1", 00:19:42.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.691 "is_configured": false, 00:19:42.691 "data_offset": 0, 00:19:42.691 "data_size": 0 00:19:42.691 }, 00:19:42.691 { 00:19:42.691 "name": "BaseBdev2", 00:19:42.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.691 "is_configured": false, 00:19:42.691 "data_offset": 0, 00:19:42.691 "data_size": 0 00:19:42.691 }, 00:19:42.691 { 00:19:42.691 "name": "BaseBdev3", 00:19:42.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.691 "is_configured": false, 00:19:42.691 "data_offset": 0, 00:19:42.691 "data_size": 0 00:19:42.691 }, 00:19:42.691 { 00:19:42.691 "name": "BaseBdev4", 00:19:42.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.691 "is_configured": false, 00:19:42.691 "data_offset": 0, 00:19:42.691 "data_size": 0 00:19:42.691 } 00:19:42.691 ] 00:19:42.691 }' 00:19:42.691 16:35:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.691 16:35:19 -- common/autotest_common.sh@10 -- # set +x 00:19:43.626 16:35:20 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:43.626 [2024-07-11 16:35:20.319738] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:43.626 [2024-07-11 16:35:20.319793] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:43.626 16:35:20 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:43.884 [2024-07-11 16:35:20.543818] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:43.884 [2024-07-11 16:35:20.543889] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:43.884 [2024-07-11 16:35:20.543915] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:43.884 [2024-07-11 16:35:20.543944] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:43.884 [2024-07-11 16:35:20.543952] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:43.884 [2024-07-11 
16:35:20.544015] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:43.884 [2024-07-11 16:35:20.544022] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:43.884 [2024-07-11 16:35:20.544044] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:43.884 16:35:20 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:44.143 [2024-07-11 16:35:20.805696] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:44.143 BaseBdev1 00:19:44.143 16:35:20 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:44.143 16:35:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:44.143 16:35:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:44.143 16:35:20 -- common/autotest_common.sh@889 -- # local i 00:19:44.143 16:35:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:44.143 16:35:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:44.143 16:35:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:44.401 16:35:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:44.401 [ 00:19:44.401 { 00:19:44.401 "name": "BaseBdev1", 00:19:44.401 "aliases": [ 00:19:44.401 "71100d87-2d17-4528-881f-f398138264c0" 00:19:44.401 ], 00:19:44.401 "product_name": "Malloc disk", 00:19:44.401 "block_size": 512, 00:19:44.401 "num_blocks": 65536, 00:19:44.401 "uuid": "71100d87-2d17-4528-881f-f398138264c0", 00:19:44.402 "assigned_rate_limits": { 00:19:44.402 "rw_ios_per_sec": 0, 00:19:44.402 "rw_mbytes_per_sec": 0, 00:19:44.402 "r_mbytes_per_sec": 0, 00:19:44.402 "w_mbytes_per_sec": 0 00:19:44.402 }, 00:19:44.402 "claimed": true, 00:19:44.402 "claim_type": "exclusive_write", 00:19:44.402 "zoned": false, 00:19:44.402 "supported_io_types": { 00:19:44.402 "read": true, 00:19:44.402 "write": true, 00:19:44.402 "unmap": true, 00:19:44.402 "write_zeroes": true, 00:19:44.402 "flush": true, 00:19:44.402 "reset": true, 00:19:44.402 "compare": false, 00:19:44.402 "compare_and_write": false, 00:19:44.402 "abort": true, 00:19:44.402 "nvme_admin": false, 00:19:44.402 "nvme_io": false 00:19:44.402 }, 00:19:44.402 "memory_domains": [ 00:19:44.402 { 00:19:44.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:44.402 "dma_device_type": 2 00:19:44.402 } 00:19:44.402 ], 00:19:44.402 "driver_specific": {} 00:19:44.402 } 00:19:44.402 ] 00:19:44.402 16:35:21 -- common/autotest_common.sh@895 -- # return 0 00:19:44.402 16:35:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:44.402 16:35:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:44.402 16:35:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:44.402 16:35:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:44.402 16:35:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:44.402 16:35:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:44.402 16:35:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:44.402 16:35:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:44.402 16:35:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:44.402 16:35:21 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:44.402 16:35:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:44.402 16:35:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.659 16:35:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:44.659 "name": "Existed_Raid", 00:19:44.659 "uuid": "c96c7162-56e7-4f0d-b197-6a0c692e1d4d", 00:19:44.659 "strip_size_kb": 0, 00:19:44.659 "state": "configuring", 00:19:44.659 "raid_level": "raid1", 00:19:44.659 "superblock": true, 00:19:44.659 "num_base_bdevs": 4, 00:19:44.659 "num_base_bdevs_discovered": 1, 00:19:44.659 "num_base_bdevs_operational": 4, 00:19:44.659 "base_bdevs_list": [ 00:19:44.659 { 00:19:44.659 "name": "BaseBdev1", 00:19:44.659 "uuid": "71100d87-2d17-4528-881f-f398138264c0", 00:19:44.659 "is_configured": true, 00:19:44.659 "data_offset": 2048, 00:19:44.660 "data_size": 63488 00:19:44.660 }, 00:19:44.660 { 00:19:44.660 "name": "BaseBdev2", 00:19:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.660 "is_configured": false, 00:19:44.660 "data_offset": 0, 00:19:44.660 "data_size": 0 00:19:44.660 }, 00:19:44.660 { 00:19:44.660 "name": "BaseBdev3", 00:19:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.660 "is_configured": false, 00:19:44.660 "data_offset": 0, 00:19:44.660 "data_size": 0 00:19:44.660 }, 00:19:44.660 { 00:19:44.660 "name": "BaseBdev4", 00:19:44.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.660 "is_configured": false, 00:19:44.660 "data_offset": 0, 00:19:44.660 "data_size": 0 00:19:44.660 } 00:19:44.660 ] 00:19:44.660 }' 00:19:44.660 16:35:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:44.660 16:35:21 -- common/autotest_common.sh@10 -- # set +x 00:19:45.225 16:35:21 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:45.483 [2024-07-11 16:35:22.197928] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:45.484 [2024-07-11 16:35:22.197976] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:45.484 16:35:22 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:45.484 16:35:22 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:45.742 16:35:22 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:46.000 BaseBdev1 00:19:46.000 16:35:22 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:46.000 16:35:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:19:46.000 16:35:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:46.000 16:35:22 -- common/autotest_common.sh@889 -- # local i 00:19:46.000 16:35:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:46.000 16:35:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:46.000 16:35:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:46.259 16:35:22 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:46.259 [ 00:19:46.259 { 00:19:46.259 "name": "BaseBdev1", 00:19:46.259 "aliases": [ 00:19:46.259 
"18bc7769-c0f1-46ff-b7ee-9149f0d80ea4" 00:19:46.259 ], 00:19:46.259 "product_name": "Malloc disk", 00:19:46.259 "block_size": 512, 00:19:46.259 "num_blocks": 65536, 00:19:46.259 "uuid": "18bc7769-c0f1-46ff-b7ee-9149f0d80ea4", 00:19:46.259 "assigned_rate_limits": { 00:19:46.259 "rw_ios_per_sec": 0, 00:19:46.259 "rw_mbytes_per_sec": 0, 00:19:46.259 "r_mbytes_per_sec": 0, 00:19:46.259 "w_mbytes_per_sec": 0 00:19:46.259 }, 00:19:46.259 "claimed": false, 00:19:46.259 "zoned": false, 00:19:46.259 "supported_io_types": { 00:19:46.259 "read": true, 00:19:46.259 "write": true, 00:19:46.259 "unmap": true, 00:19:46.259 "write_zeroes": true, 00:19:46.259 "flush": true, 00:19:46.259 "reset": true, 00:19:46.259 "compare": false, 00:19:46.259 "compare_and_write": false, 00:19:46.259 "abort": true, 00:19:46.259 "nvme_admin": false, 00:19:46.259 "nvme_io": false 00:19:46.259 }, 00:19:46.259 "memory_domains": [ 00:19:46.259 { 00:19:46.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:46.259 "dma_device_type": 2 00:19:46.259 } 00:19:46.259 ], 00:19:46.259 "driver_specific": {} 00:19:46.259 } 00:19:46.259 ] 00:19:46.259 16:35:23 -- common/autotest_common.sh@895 -- # return 0 00:19:46.259 16:35:23 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:46.518 [2024-07-11 16:35:23.196593] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.518 [2024-07-11 16:35:23.198388] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:46.518 [2024-07-11 16:35:23.198459] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:46.518 [2024-07-11 16:35:23.198487] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:46.518 [2024-07-11 16:35:23.198510] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:46.518 [2024-07-11 16:35:23.198518] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:46.518 [2024-07-11 16:35:23.198532] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.518 16:35:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.789 16:35:23 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:19:46.789 "name": "Existed_Raid", 00:19:46.789 "uuid": "9ca4c913-5562-41db-afc1-241b0df567fd", 00:19:46.789 "strip_size_kb": 0, 00:19:46.789 "state": "configuring", 00:19:46.789 "raid_level": "raid1", 00:19:46.789 "superblock": true, 00:19:46.789 "num_base_bdevs": 4, 00:19:46.789 "num_base_bdevs_discovered": 1, 00:19:46.789 "num_base_bdevs_operational": 4, 00:19:46.789 "base_bdevs_list": [ 00:19:46.789 { 00:19:46.789 "name": "BaseBdev1", 00:19:46.789 "uuid": "18bc7769-c0f1-46ff-b7ee-9149f0d80ea4", 00:19:46.789 "is_configured": true, 00:19:46.789 "data_offset": 2048, 00:19:46.789 "data_size": 63488 00:19:46.789 }, 00:19:46.789 { 00:19:46.789 "name": "BaseBdev2", 00:19:46.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.789 "is_configured": false, 00:19:46.789 "data_offset": 0, 00:19:46.789 "data_size": 0 00:19:46.789 }, 00:19:46.789 { 00:19:46.789 "name": "BaseBdev3", 00:19:46.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.789 "is_configured": false, 00:19:46.789 "data_offset": 0, 00:19:46.789 "data_size": 0 00:19:46.790 }, 00:19:46.790 { 00:19:46.790 "name": "BaseBdev4", 00:19:46.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.790 "is_configured": false, 00:19:46.790 "data_offset": 0, 00:19:46.790 "data_size": 0 00:19:46.790 } 00:19:46.790 ] 00:19:46.790 }' 00:19:46.790 16:35:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:46.790 16:35:23 -- common/autotest_common.sh@10 -- # set +x 00:19:47.365 16:35:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:47.623 [2024-07-11 16:35:24.213187] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:47.623 BaseBdev2 00:19:47.623 16:35:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:19:47.623 16:35:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:19:47.623 16:35:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:47.623 16:35:24 -- common/autotest_common.sh@889 -- # local i 00:19:47.623 16:35:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:47.623 16:35:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:47.623 16:35:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:47.623 16:35:24 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:47.881 [ 00:19:47.881 { 00:19:47.881 "name": "BaseBdev2", 00:19:47.881 "aliases": [ 00:19:47.881 "1a1a06d9-27dd-47eb-a02f-b352d2bfb29c" 00:19:47.881 ], 00:19:47.881 "product_name": "Malloc disk", 00:19:47.881 "block_size": 512, 00:19:47.881 "num_blocks": 65536, 00:19:47.881 "uuid": "1a1a06d9-27dd-47eb-a02f-b352d2bfb29c", 00:19:47.881 "assigned_rate_limits": { 00:19:47.881 "rw_ios_per_sec": 0, 00:19:47.881 "rw_mbytes_per_sec": 0, 00:19:47.881 "r_mbytes_per_sec": 0, 00:19:47.881 "w_mbytes_per_sec": 0 00:19:47.881 }, 00:19:47.881 "claimed": true, 00:19:47.881 "claim_type": "exclusive_write", 00:19:47.881 "zoned": false, 00:19:47.881 "supported_io_types": { 00:19:47.881 "read": true, 00:19:47.881 "write": true, 00:19:47.881 "unmap": true, 00:19:47.881 "write_zeroes": true, 00:19:47.881 "flush": true, 00:19:47.881 "reset": true, 00:19:47.881 "compare": false, 00:19:47.881 "compare_and_write": false, 00:19:47.881 "abort": true, 00:19:47.881 "nvme_admin": false, 00:19:47.881 
"nvme_io": false 00:19:47.881 }, 00:19:47.881 "memory_domains": [ 00:19:47.881 { 00:19:47.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.881 "dma_device_type": 2 00:19:47.881 } 00:19:47.881 ], 00:19:47.881 "driver_specific": {} 00:19:47.881 } 00:19:47.881 ] 00:19:47.881 16:35:24 -- common/autotest_common.sh@895 -- # return 0 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.881 16:35:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.139 16:35:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:48.139 "name": "Existed_Raid", 00:19:48.139 "uuid": "9ca4c913-5562-41db-afc1-241b0df567fd", 00:19:48.139 "strip_size_kb": 0, 00:19:48.139 "state": "configuring", 00:19:48.139 "raid_level": "raid1", 00:19:48.139 "superblock": true, 00:19:48.139 "num_base_bdevs": 4, 00:19:48.139 "num_base_bdevs_discovered": 2, 00:19:48.139 "num_base_bdevs_operational": 4, 00:19:48.139 "base_bdevs_list": [ 00:19:48.139 { 00:19:48.139 "name": "BaseBdev1", 00:19:48.139 "uuid": "18bc7769-c0f1-46ff-b7ee-9149f0d80ea4", 00:19:48.139 "is_configured": true, 00:19:48.139 "data_offset": 2048, 00:19:48.139 "data_size": 63488 00:19:48.139 }, 00:19:48.139 { 00:19:48.139 "name": "BaseBdev2", 00:19:48.139 "uuid": "1a1a06d9-27dd-47eb-a02f-b352d2bfb29c", 00:19:48.139 "is_configured": true, 00:19:48.139 "data_offset": 2048, 00:19:48.139 "data_size": 63488 00:19:48.139 }, 00:19:48.139 { 00:19:48.139 "name": "BaseBdev3", 00:19:48.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.139 "is_configured": false, 00:19:48.139 "data_offset": 0, 00:19:48.139 "data_size": 0 00:19:48.139 }, 00:19:48.139 { 00:19:48.139 "name": "BaseBdev4", 00:19:48.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.139 "is_configured": false, 00:19:48.139 "data_offset": 0, 00:19:48.139 "data_size": 0 00:19:48.139 } 00:19:48.139 ] 00:19:48.139 }' 00:19:48.139 16:35:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:48.139 16:35:24 -- common/autotest_common.sh@10 -- # set +x 00:19:48.705 16:35:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:48.964 [2024-07-11 16:35:25.665260] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:48.964 BaseBdev3 00:19:48.964 16:35:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:19:48.964 16:35:25 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:19:48.964 16:35:25 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:48.964 16:35:25 -- common/autotest_common.sh@889 -- # local i 00:19:48.964 16:35:25 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:48.964 16:35:25 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:48.964 16:35:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:49.222 16:35:25 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:49.222 [ 00:19:49.222 { 00:19:49.222 "name": "BaseBdev3", 00:19:49.222 "aliases": [ 00:19:49.222 "2233d9ef-6585-48a1-9e38-92644a1f01d9" 00:19:49.222 ], 00:19:49.222 "product_name": "Malloc disk", 00:19:49.222 "block_size": 512, 00:19:49.222 "num_blocks": 65536, 00:19:49.222 "uuid": "2233d9ef-6585-48a1-9e38-92644a1f01d9", 00:19:49.222 "assigned_rate_limits": { 00:19:49.222 "rw_ios_per_sec": 0, 00:19:49.222 "rw_mbytes_per_sec": 0, 00:19:49.222 "r_mbytes_per_sec": 0, 00:19:49.222 "w_mbytes_per_sec": 0 00:19:49.222 }, 00:19:49.222 "claimed": true, 00:19:49.222 "claim_type": "exclusive_write", 00:19:49.222 "zoned": false, 00:19:49.222 "supported_io_types": { 00:19:49.222 "read": true, 00:19:49.222 "write": true, 00:19:49.222 "unmap": true, 00:19:49.222 "write_zeroes": true, 00:19:49.222 "flush": true, 00:19:49.222 "reset": true, 00:19:49.222 "compare": false, 00:19:49.222 "compare_and_write": false, 00:19:49.222 "abort": true, 00:19:49.222 "nvme_admin": false, 00:19:49.222 "nvme_io": false 00:19:49.222 }, 00:19:49.222 "memory_domains": [ 00:19:49.222 { 00:19:49.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:49.222 "dma_device_type": 2 00:19:49.222 } 00:19:49.222 ], 00:19:49.222 "driver_specific": {} 00:19:49.222 } 00:19:49.222 ] 00:19:49.481 16:35:26 -- common/autotest_common.sh@895 -- # return 0 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:49.481 "name": "Existed_Raid", 00:19:49.481 "uuid": "9ca4c913-5562-41db-afc1-241b0df567fd", 00:19:49.481 "strip_size_kb": 0, 00:19:49.481 "state": "configuring", 00:19:49.481 "raid_level": "raid1", 00:19:49.481 "superblock": true, 00:19:49.481 "num_base_bdevs": 4, 00:19:49.481 "num_base_bdevs_discovered": 3, 00:19:49.481 "num_base_bdevs_operational": 4, 00:19:49.481 
"base_bdevs_list": [ 00:19:49.481 { 00:19:49.481 "name": "BaseBdev1", 00:19:49.481 "uuid": "18bc7769-c0f1-46ff-b7ee-9149f0d80ea4", 00:19:49.481 "is_configured": true, 00:19:49.481 "data_offset": 2048, 00:19:49.481 "data_size": 63488 00:19:49.481 }, 00:19:49.481 { 00:19:49.481 "name": "BaseBdev2", 00:19:49.481 "uuid": "1a1a06d9-27dd-47eb-a02f-b352d2bfb29c", 00:19:49.481 "is_configured": true, 00:19:49.481 "data_offset": 2048, 00:19:49.481 "data_size": 63488 00:19:49.481 }, 00:19:49.481 { 00:19:49.481 "name": "BaseBdev3", 00:19:49.481 "uuid": "2233d9ef-6585-48a1-9e38-92644a1f01d9", 00:19:49.481 "is_configured": true, 00:19:49.481 "data_offset": 2048, 00:19:49.481 "data_size": 63488 00:19:49.481 }, 00:19:49.481 { 00:19:49.481 "name": "BaseBdev4", 00:19:49.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.481 "is_configured": false, 00:19:49.481 "data_offset": 0, 00:19:49.481 "data_size": 0 00:19:49.481 } 00:19:49.481 ] 00:19:49.481 }' 00:19:49.481 16:35:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:49.481 16:35:26 -- common/autotest_common.sh@10 -- # set +x 00:19:50.416 16:35:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:19:50.416 [2024-07-11 16:35:27.116925] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:50.416 [2024-07-11 16:35:27.117204] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:19:50.416 [2024-07-11 16:35:27.117219] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:50.416 BaseBdev4 00:19:50.416 [2024-07-11 16:35:27.117414] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:19:50.416 [2024-07-11 16:35:27.117759] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:19:50.416 [2024-07-11 16:35:27.117784] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:19:50.416 [2024-07-11 16:35:27.117947] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.416 16:35:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:19:50.416 16:35:27 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:19:50.416 16:35:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:50.416 16:35:27 -- common/autotest_common.sh@889 -- # local i 00:19:50.416 16:35:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:50.416 16:35:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:50.416 16:35:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:50.676 16:35:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:19:50.934 [ 00:19:50.934 { 00:19:50.934 "name": "BaseBdev4", 00:19:50.934 "aliases": [ 00:19:50.934 "651c6af5-00a6-40d4-b243-6332858f6eb7" 00:19:50.934 ], 00:19:50.934 "product_name": "Malloc disk", 00:19:50.934 "block_size": 512, 00:19:50.934 "num_blocks": 65536, 00:19:50.934 "uuid": "651c6af5-00a6-40d4-b243-6332858f6eb7", 00:19:50.934 "assigned_rate_limits": { 00:19:50.934 "rw_ios_per_sec": 0, 00:19:50.934 "rw_mbytes_per_sec": 0, 00:19:50.934 "r_mbytes_per_sec": 0, 00:19:50.934 "w_mbytes_per_sec": 0 00:19:50.934 }, 00:19:50.934 "claimed": true, 00:19:50.934 "claim_type": 
"exclusive_write", 00:19:50.934 "zoned": false, 00:19:50.934 "supported_io_types": { 00:19:50.934 "read": true, 00:19:50.934 "write": true, 00:19:50.934 "unmap": true, 00:19:50.934 "write_zeroes": true, 00:19:50.934 "flush": true, 00:19:50.934 "reset": true, 00:19:50.934 "compare": false, 00:19:50.934 "compare_and_write": false, 00:19:50.934 "abort": true, 00:19:50.934 "nvme_admin": false, 00:19:50.934 "nvme_io": false 00:19:50.934 }, 00:19:50.934 "memory_domains": [ 00:19:50.934 { 00:19:50.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.934 "dma_device_type": 2 00:19:50.935 } 00:19:50.935 ], 00:19:50.935 "driver_specific": {} 00:19:50.935 } 00:19:50.935 ] 00:19:50.935 16:35:27 -- common/autotest_common.sh@895 -- # return 0 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.935 16:35:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.193 16:35:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:51.193 "name": "Existed_Raid", 00:19:51.193 "uuid": "9ca4c913-5562-41db-afc1-241b0df567fd", 00:19:51.193 "strip_size_kb": 0, 00:19:51.193 "state": "online", 00:19:51.193 "raid_level": "raid1", 00:19:51.193 "superblock": true, 00:19:51.193 "num_base_bdevs": 4, 00:19:51.193 "num_base_bdevs_discovered": 4, 00:19:51.193 "num_base_bdevs_operational": 4, 00:19:51.193 "base_bdevs_list": [ 00:19:51.193 { 00:19:51.193 "name": "BaseBdev1", 00:19:51.193 "uuid": "18bc7769-c0f1-46ff-b7ee-9149f0d80ea4", 00:19:51.193 "is_configured": true, 00:19:51.193 "data_offset": 2048, 00:19:51.193 "data_size": 63488 00:19:51.193 }, 00:19:51.193 { 00:19:51.193 "name": "BaseBdev2", 00:19:51.193 "uuid": "1a1a06d9-27dd-47eb-a02f-b352d2bfb29c", 00:19:51.193 "is_configured": true, 00:19:51.193 "data_offset": 2048, 00:19:51.193 "data_size": 63488 00:19:51.193 }, 00:19:51.193 { 00:19:51.193 "name": "BaseBdev3", 00:19:51.193 "uuid": "2233d9ef-6585-48a1-9e38-92644a1f01d9", 00:19:51.193 "is_configured": true, 00:19:51.193 "data_offset": 2048, 00:19:51.193 "data_size": 63488 00:19:51.193 }, 00:19:51.193 { 00:19:51.193 "name": "BaseBdev4", 00:19:51.193 "uuid": "651c6af5-00a6-40d4-b243-6332858f6eb7", 00:19:51.193 "is_configured": true, 00:19:51.193 "data_offset": 2048, 00:19:51.193 "data_size": 63488 00:19:51.193 } 00:19:51.193 ] 00:19:51.193 }' 00:19:51.193 16:35:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:51.193 16:35:27 -- common/autotest_common.sh@10 -- # set +x 00:19:51.759 16:35:28 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:52.018 [2024-07-11 16:35:28.641341] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@196 -- # return 0 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.018 16:35:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:52.277 16:35:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:52.277 "name": "Existed_Raid", 00:19:52.277 "uuid": "9ca4c913-5562-41db-afc1-241b0df567fd", 00:19:52.277 "strip_size_kb": 0, 00:19:52.277 "state": "online", 00:19:52.277 "raid_level": "raid1", 00:19:52.277 "superblock": true, 00:19:52.277 "num_base_bdevs": 4, 00:19:52.277 "num_base_bdevs_discovered": 3, 00:19:52.277 "num_base_bdevs_operational": 3, 00:19:52.277 "base_bdevs_list": [ 00:19:52.277 { 00:19:52.277 "name": null, 00:19:52.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.277 "is_configured": false, 00:19:52.277 "data_offset": 2048, 00:19:52.277 "data_size": 63488 00:19:52.277 }, 00:19:52.277 { 00:19:52.277 "name": "BaseBdev2", 00:19:52.277 "uuid": "1a1a06d9-27dd-47eb-a02f-b352d2bfb29c", 00:19:52.277 "is_configured": true, 00:19:52.277 "data_offset": 2048, 00:19:52.277 "data_size": 63488 00:19:52.277 }, 00:19:52.277 { 00:19:52.277 "name": "BaseBdev3", 00:19:52.277 "uuid": "2233d9ef-6585-48a1-9e38-92644a1f01d9", 00:19:52.277 "is_configured": true, 00:19:52.277 "data_offset": 2048, 00:19:52.277 "data_size": 63488 00:19:52.277 }, 00:19:52.277 { 00:19:52.277 "name": "BaseBdev4", 00:19:52.277 "uuid": "651c6af5-00a6-40d4-b243-6332858f6eb7", 00:19:52.277 "is_configured": true, 00:19:52.277 "data_offset": 2048, 00:19:52.277 "data_size": 63488 00:19:52.277 } 00:19:52.277 ] 00:19:52.277 }' 00:19:52.277 16:35:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:52.277 16:35:28 -- common/autotest_common.sh@10 -- # set +x 00:19:52.844 16:35:29 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:19:52.844 16:35:29 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:52.844 16:35:29 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:52.844 16:35:29 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.103 16:35:29 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:53.103 16:35:29 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:53.103 16:35:29 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:53.361 [2024-07-11 16:35:29.938591] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:53.361 16:35:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:53.361 16:35:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:53.361 16:35:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:53.361 16:35:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.620 16:35:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:53.620 16:35:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:53.620 16:35:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:53.620 [2024-07-11 16:35:30.417367] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:53.878 16:35:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:53.878 16:35:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:53.878 16:35:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.878 16:35:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:19:54.137 16:35:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:19:54.137 16:35:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:54.137 16:35:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:19:54.137 [2024-07-11 16:35:30.881343] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:19:54.137 [2024-07-11 16:35:30.881390] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:54.137 [2024-07-11 16:35:30.881498] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:54.396 [2024-07-11 16:35:30.949880] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:54.396 [2024-07-11 16:35:30.949932] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:19:54.396 16:35:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:19:54.396 16:35:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:19:54.396 16:35:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.396 16:35:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:19:54.396 16:35:31 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:19:54.396 16:35:31 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:19:54.396 16:35:31 -- bdev/bdev_raid.sh@287 -- # killprocess 124340 00:19:54.396 16:35:31 -- common/autotest_common.sh@926 -- # '[' -z 124340 ']' 00:19:54.396 16:35:31 -- common/autotest_common.sh@930 -- # kill -0 124340 00:19:54.396 16:35:31 -- common/autotest_common.sh@931 -- # uname 00:19:54.396 16:35:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:54.396 16:35:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124340 00:19:54.396 killing process with pid 124340 00:19:54.396 16:35:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
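The removals traced above all follow one round trip: delete a base bdev over RPC, then confirm the raid1 array still exists. A condensed sketch of that pattern, reusing only commands that appear in this trace; the explicit exit on mismatch is ours, not the suite's:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# drop one base bdev; raid1 carries redundancy, so the array must survive
$rpc bdev_malloc_delete BaseBdev2
# the same existence check the trace runs after every removal
raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
[ "$raid_bdev" = "Existed_Raid" ] || exit 1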
00:19:54.396 16:35:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:54.396 16:35:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124340' 00:19:54.396 16:35:31 -- common/autotest_common.sh@945 -- # kill 124340 00:19:54.396 16:35:31 -- common/autotest_common.sh@950 -- # wait 124340 00:19:54.396 [2024-07-11 16:35:31.170236] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:54.396 [2024-07-11 16:35:31.170380] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:55.332 ************************************ 00:19:55.332 END TEST raid_state_function_test_sb 00:19:55.332 ************************************ 00:19:55.332 16:35:32 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:55.333 00:19:55.333 real 0m14.042s 00:19:55.333 user 0m25.239s 00:19:55.333 sys 0m1.477s 00:19:55.333 16:35:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.333 16:35:32 -- common/autotest_common.sh@10 -- # set +x 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:19:55.333 16:35:32 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:55.333 16:35:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:55.333 16:35:32 -- common/autotest_common.sh@10 -- # set +x 00:19:55.333 ************************************ 00:19:55.333 START TEST raid_superblock_test 00:19:55.333 ************************************ 00:19:55.333 16:35:32 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@357 -- # raid_pid=124823 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@358 -- # waitforlisten 124823 /var/tmp/spdk-raid.sock 00:19:55.333 16:35:32 -- common/autotest_common.sh@819 -- # '[' -z 124823 ']' 00:19:55.333 16:35:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:55.333 16:35:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:55.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:55.333 16:35:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
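Before the superblock test proper begins, the harness brings up a standalone bdev_svc app on a private RPC socket and blocks until it answers. Roughly, with the polling loop simplified from waitforlisten (probing readiness via rpc_get_methods is our assumption, not lifted from this trace):

# start the RPC target used by raid_superblock_test, then wait for the socket
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done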
00:19:55.333 16:35:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:55.333 16:35:32 -- common/autotest_common.sh@10 -- # set +x 00:19:55.333 16:35:32 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:55.591 [2024-07-11 16:35:32.188156] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:55.591 [2024-07-11 16:35:32.189278] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124823 ] 00:19:55.591 [2024-07-11 16:35:32.363085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.850 [2024-07-11 16:35:32.553770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.108 [2024-07-11 16:35:32.716048] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:56.375 16:35:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:56.375 16:35:33 -- common/autotest_common.sh@852 -- # return 0 00:19:56.375 16:35:33 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:19:56.375 16:35:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:56.375 16:35:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:19:56.375 16:35:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:19:56.375 16:35:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:56.375 16:35:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:56.375 16:35:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:56.375 16:35:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:56.375 16:35:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:56.640 malloc1 00:19:56.640 16:35:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:56.640 [2024-07-11 16:35:33.405765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:56.640 [2024-07-11 16:35:33.405861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:56.640 [2024-07-11 16:35:33.405893] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:56.640 [2024-07-11 16:35:33.405940] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:56.640 [2024-07-11 16:35:33.408232] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:56.640 [2024-07-11 16:35:33.408292] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:56.640 pt1 00:19:56.640 16:35:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:56.640 16:35:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:56.640 16:35:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:19:56.640 16:35:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:19:56.640 16:35:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:56.640 16:35:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:56.640 16:35:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:56.640 16:35:33 -- bdev/bdev_raid.sh@368 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:56.640 16:35:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:56.898 malloc2 00:19:56.898 16:35:33 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:57.156 [2024-07-11 16:35:33.847091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:57.156 [2024-07-11 16:35:33.847157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.156 [2024-07-11 16:35:33.847194] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:57.156 [2024-07-11 16:35:33.847242] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.156 [2024-07-11 16:35:33.849140] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.156 [2024-07-11 16:35:33.849194] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:57.156 pt2 00:19:57.156 16:35:33 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:57.156 16:35:33 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:57.156 16:35:33 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:19:57.156 16:35:33 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:19:57.156 16:35:33 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:57.156 16:35:33 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:57.156 16:35:33 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:57.156 16:35:33 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:57.156 16:35:33 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:57.413 malloc3 00:19:57.414 16:35:34 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:57.671 [2024-07-11 16:35:34.235616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:57.671 [2024-07-11 16:35:34.235681] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.671 [2024-07-11 16:35:34.235715] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:57.671 [2024-07-11 16:35:34.235753] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.671 [2024-07-11 16:35:34.237698] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.671 [2024-07-11 16:35:34.237745] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:57.671 pt3 00:19:57.671 16:35:34 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:57.671 16:35:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:57.671 16:35:34 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:19:57.671 16:35:34 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:19:57.671 16:35:34 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:19:57.671 16:35:34 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:57.671 16:35:34 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:19:57.671 16:35:34 -- bdev/bdev_raid.sh@368 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:57.671 16:35:34 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:19:57.671 malloc4 00:19:57.671 16:35:34 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:57.929 [2024-07-11 16:35:34.685749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:57.929 [2024-07-11 16:35:34.685823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.929 [2024-07-11 16:35:34.685861] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:57.929 [2024-07-11 16:35:34.685900] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.929 [2024-07-11 16:35:34.687779] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.929 [2024-07-11 16:35:34.687823] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:57.929 pt4 00:19:57.929 16:35:34 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:19:57.929 16:35:34 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:19:57.929 16:35:34 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:19:58.189 [2024-07-11 16:35:34.869810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:58.189 [2024-07-11 16:35:34.871377] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:58.189 [2024-07-11 16:35:34.871447] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:58.189 [2024-07-11 16:35:34.871503] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:58.189 [2024-07-11 16:35:34.871748] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:19:58.189 [2024-07-11 16:35:34.871774] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:58.189 [2024-07-11 16:35:34.871932] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:19:58.189 [2024-07-11 16:35:34.872272] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:19:58.189 [2024-07-11 16:35:34.872296] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:19:58.189 [2024-07-11 16:35:34.872463] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.189 16:35:34 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:19:58.189 16:35:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:58.189 16:35:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:58.189 16:35:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:58.189 16:35:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:58.189 16:35:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:58.189 16:35:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:58.189 16:35:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:58.189 16:35:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:58.189 16:35:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:58.189 16:35:34 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.189 16:35:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.462 16:35:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:58.462 "name": "raid_bdev1", 00:19:58.462 "uuid": "d32bc84c-a394-4774-aab0-738d1288c571", 00:19:58.462 "strip_size_kb": 0, 00:19:58.462 "state": "online", 00:19:58.462 "raid_level": "raid1", 00:19:58.462 "superblock": true, 00:19:58.462 "num_base_bdevs": 4, 00:19:58.462 "num_base_bdevs_discovered": 4, 00:19:58.462 "num_base_bdevs_operational": 4, 00:19:58.462 "base_bdevs_list": [ 00:19:58.462 { 00:19:58.462 "name": "pt1", 00:19:58.462 "uuid": "d1d906d6-68a4-5eca-ac2d-5b4233b90b2c", 00:19:58.462 "is_configured": true, 00:19:58.462 "data_offset": 2048, 00:19:58.462 "data_size": 63488 00:19:58.462 }, 00:19:58.462 { 00:19:58.462 "name": "pt2", 00:19:58.462 "uuid": "179869c7-e6b5-5cef-9d81-8ba225f6d1a9", 00:19:58.462 "is_configured": true, 00:19:58.462 "data_offset": 2048, 00:19:58.462 "data_size": 63488 00:19:58.462 }, 00:19:58.462 { 00:19:58.462 "name": "pt3", 00:19:58.462 "uuid": "e6a8386c-4e39-5980-9839-d68290e11436", 00:19:58.462 "is_configured": true, 00:19:58.462 "data_offset": 2048, 00:19:58.462 "data_size": 63488 00:19:58.462 }, 00:19:58.462 { 00:19:58.462 "name": "pt4", 00:19:58.462 "uuid": "b3af08e5-c3ef-578e-a2d1-9a53b9e096d1", 00:19:58.462 "is_configured": true, 00:19:58.462 "data_offset": 2048, 00:19:58.462 "data_size": 63488 00:19:58.462 } 00:19:58.462 ] 00:19:58.462 }' 00:19:58.462 16:35:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:58.462 16:35:35 -- common/autotest_common.sh@10 -- # set +x 00:19:59.028 16:35:35 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:59.028 16:35:35 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:19:59.285 [2024-07-11 16:35:35.898123] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:59.285 16:35:35 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d32bc84c-a394-4774-aab0-738d1288c571 00:19:59.285 16:35:35 -- bdev/bdev_raid.sh@380 -- # '[' -z d32bc84c-a394-4774-aab0-738d1288c571 ']' 00:19:59.285 16:35:35 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:59.285 [2024-07-11 16:35:36.077948] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:59.285 [2024-07-11 16:35:36.077974] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:59.285 [2024-07-11 16:35:36.078034] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:59.285 [2024-07-11 16:35:36.078107] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:59.285 [2024-07-11 16:35:36.078119] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:19:59.285 16:35:36 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.285 16:35:36 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:19:59.543 16:35:36 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:19:59.543 16:35:36 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:19:59.543 16:35:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:59.543 16:35:36 -- 
bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:59.800 16:35:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:19:59.800 16:35:36 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:00.059 16:35:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:00.059 16:35:36 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:00.318 16:35:36 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:20:00.318 16:35:36 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:00.318 16:35:37 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:00.318 16:35:37 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:00.577 16:35:37 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:20:00.577 16:35:37 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:00.577 16:35:37 -- common/autotest_common.sh@640 -- # local es=0 00:20:00.577 16:35:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:00.577 16:35:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.577 16:35:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:00.577 16:35:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.577 16:35:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:00.577 16:35:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.577 16:35:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:00.577 16:35:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.577 16:35:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:00.577 16:35:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:20:00.835 [2024-07-11 16:35:37.398129] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:00.835 [2024-07-11 16:35:37.399769] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:00.835 [2024-07-11 16:35:37.399829] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:00.835 [2024-07-11 16:35:37.399869] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:00.835 [2024-07-11 16:35:37.399928] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:20:00.835 [2024-07-11 16:35:37.400002] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:20:00.835 [2024-07-11 16:35:37.400069] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:20:00.835 [2024-07-11 16:35:37.400122] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:20:00.835 [2024-07-11 16:35:37.400148] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:00.836 [2024-07-11 16:35:37.400158] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:20:00.836 request: 00:20:00.836 { 00:20:00.836 "name": "raid_bdev1", 00:20:00.836 "raid_level": "raid1", 00:20:00.836 "base_bdevs": [ 00:20:00.836 "malloc1", 00:20:00.836 "malloc2", 00:20:00.836 "malloc3", 00:20:00.836 "malloc4" 00:20:00.836 ], 00:20:00.836 "superblock": false, 00:20:00.836 "method": "bdev_raid_create", 00:20:00.836 "req_id": 1 00:20:00.836 } 00:20:00.836 Got JSON-RPC error response 00:20:00.836 response: 00:20:00.836 { 00:20:00.836 "code": -17, 00:20:00.836 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:00.836 } 00:20:00.836 16:35:37 -- common/autotest_common.sh@643 -- # es=1 00:20:00.836 16:35:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:00.836 16:35:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:00.836 16:35:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:00.836 16:35:37 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.836 16:35:37 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:20:00.836 16:35:37 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:20:00.836 16:35:37 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:20:00.836 16:35:37 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:01.094 [2024-07-11 16:35:37.758153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:01.094 [2024-07-11 16:35:37.758206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.094 [2024-07-11 16:35:37.758232] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:01.094 [2024-07-11 16:35:37.758253] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.094 [2024-07-11 16:35:37.760118] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.094 [2024-07-11 16:35:37.760175] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:01.094 [2024-07-11 16:35:37.760258] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:01.094 [2024-07-11 16:35:37.760309] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:01.094 pt1 00:20:01.094 16:35:37 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:01.094 16:35:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:01.094 16:35:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:01.095 16:35:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:01.095 16:35:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:01.095 16:35:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:01.095 16:35:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:01.095 16:35:37 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:01.095 16:35:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:01.095 16:35:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:01.095 16:35:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:01.095 16:35:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:01.353 16:35:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:01.354 "name": "raid_bdev1", 00:20:01.354 "uuid": "d32bc84c-a394-4774-aab0-738d1288c571", 00:20:01.354 "strip_size_kb": 0, 00:20:01.354 "state": "configuring", 00:20:01.354 "raid_level": "raid1", 00:20:01.354 "superblock": true, 00:20:01.354 "num_base_bdevs": 4, 00:20:01.354 "num_base_bdevs_discovered": 1, 00:20:01.354 "num_base_bdevs_operational": 4, 00:20:01.354 "base_bdevs_list": [ 00:20:01.354 { 00:20:01.354 "name": "pt1", 00:20:01.354 "uuid": "d1d906d6-68a4-5eca-ac2d-5b4233b90b2c", 00:20:01.354 "is_configured": true, 00:20:01.354 "data_offset": 2048, 00:20:01.354 "data_size": 63488 00:20:01.354 }, 00:20:01.354 { 00:20:01.354 "name": null, 00:20:01.354 "uuid": "179869c7-e6b5-5cef-9d81-8ba225f6d1a9", 00:20:01.354 "is_configured": false, 00:20:01.354 "data_offset": 2048, 00:20:01.354 "data_size": 63488 00:20:01.354 }, 00:20:01.354 { 00:20:01.354 "name": null, 00:20:01.354 "uuid": "e6a8386c-4e39-5980-9839-d68290e11436", 00:20:01.354 "is_configured": false, 00:20:01.354 "data_offset": 2048, 00:20:01.354 "data_size": 63488 00:20:01.354 }, 00:20:01.354 { 00:20:01.354 "name": null, 00:20:01.354 "uuid": "b3af08e5-c3ef-578e-a2d1-9a53b9e096d1", 00:20:01.354 "is_configured": false, 00:20:01.354 "data_offset": 2048, 00:20:01.354 "data_size": 63488 00:20:01.354 } 00:20:01.354 ] 00:20:01.354 }' 00:20:01.354 16:35:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:01.354 16:35:38 -- common/autotest_common.sh@10 -- # set +x 00:20:01.922 16:35:38 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:20:01.922 16:35:38 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:02.180 [2024-07-11 16:35:38.866362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:02.180 [2024-07-11 16:35:38.866441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.180 [2024-07-11 16:35:38.866478] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:02.180 [2024-07-11 16:35:38.866499] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.180 [2024-07-11 16:35:38.866956] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.180 [2024-07-11 16:35:38.867007] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:02.180 [2024-07-11 16:35:38.867150] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:02.180 [2024-07-11 16:35:38.867184] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:02.180 pt2 00:20:02.180 16:35:38 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:02.439 [2024-07-11 16:35:39.046392] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:02.439 16:35:39 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 
configuring raid1 0 4 00:20:02.439 16:35:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:02.439 16:35:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:02.439 16:35:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:02.439 16:35:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:02.439 16:35:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:02.439 16:35:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:02.439 16:35:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:02.439 16:35:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:02.439 16:35:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:02.439 16:35:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:02.439 16:35:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.698 16:35:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:02.698 "name": "raid_bdev1", 00:20:02.698 "uuid": "d32bc84c-a394-4774-aab0-738d1288c571", 00:20:02.698 "strip_size_kb": 0, 00:20:02.698 "state": "configuring", 00:20:02.698 "raid_level": "raid1", 00:20:02.698 "superblock": true, 00:20:02.698 "num_base_bdevs": 4, 00:20:02.698 "num_base_bdevs_discovered": 1, 00:20:02.698 "num_base_bdevs_operational": 4, 00:20:02.698 "base_bdevs_list": [ 00:20:02.698 { 00:20:02.698 "name": "pt1", 00:20:02.698 "uuid": "d1d906d6-68a4-5eca-ac2d-5b4233b90b2c", 00:20:02.698 "is_configured": true, 00:20:02.698 "data_offset": 2048, 00:20:02.698 "data_size": 63488 00:20:02.698 }, 00:20:02.698 { 00:20:02.698 "name": null, 00:20:02.698 "uuid": "179869c7-e6b5-5cef-9d81-8ba225f6d1a9", 00:20:02.698 "is_configured": false, 00:20:02.698 "data_offset": 2048, 00:20:02.698 "data_size": 63488 00:20:02.698 }, 00:20:02.698 { 00:20:02.698 "name": null, 00:20:02.698 "uuid": "e6a8386c-4e39-5980-9839-d68290e11436", 00:20:02.698 "is_configured": false, 00:20:02.698 "data_offset": 2048, 00:20:02.698 "data_size": 63488 00:20:02.698 }, 00:20:02.698 { 00:20:02.698 "name": null, 00:20:02.698 "uuid": "b3af08e5-c3ef-578e-a2d1-9a53b9e096d1", 00:20:02.698 "is_configured": false, 00:20:02.698 "data_offset": 2048, 00:20:02.698 "data_size": 63488 00:20:02.698 } 00:20:02.698 ] 00:20:02.698 }' 00:20:02.698 16:35:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:02.698 16:35:39 -- common/autotest_common.sh@10 -- # set +x 00:20:03.264 16:35:39 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:20:03.264 16:35:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:03.264 16:35:39 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:03.522 [2024-07-11 16:35:40.090641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:03.522 [2024-07-11 16:35:40.090754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.522 [2024-07-11 16:35:40.090803] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:03.522 [2024-07-11 16:35:40.090841] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.522 [2024-07-11 16:35:40.091387] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.522 [2024-07-11 16:35:40.091473] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:03.522 
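pt2 here is rebuilt exactly as the four base bdevs were created in the first place: a 32 MiB malloc bdev with 512-byte blocks (hence the 65536 blocks reported earlier), wrapped by a passthru bdev carrying a fixed UUID so the raid superblock written before can recognize and re-claim it. The two RPCs, as logged:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_create 32 512 -b malloc2      # 32 MiB backing store, 512-byte blocks
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002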
[2024-07-11 16:35:40.091568] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:03.522 [2024-07-11 16:35:40.091594] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:03.522 pt2 00:20:03.522 16:35:40 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:03.522 16:35:40 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:03.522 16:35:40 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:03.780 [2024-07-11 16:35:40.334640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:03.780 [2024-07-11 16:35:40.334710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.780 [2024-07-11 16:35:40.334740] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:03.780 [2024-07-11 16:35:40.334763] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.780 [2024-07-11 16:35:40.335164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.780 [2024-07-11 16:35:40.335221] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:03.780 [2024-07-11 16:35:40.335305] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:03.780 [2024-07-11 16:35:40.335329] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:03.780 pt3 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:03.780 [2024-07-11 16:35:40.574707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:03.780 [2024-07-11 16:35:40.574796] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.780 [2024-07-11 16:35:40.574824] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:03.780 [2024-07-11 16:35:40.574847] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.780 [2024-07-11 16:35:40.575280] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.780 [2024-07-11 16:35:40.575342] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:03.780 [2024-07-11 16:35:40.575436] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:03.780 [2024-07-11 16:35:40.575462] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:03.780 [2024-07-11 16:35:40.575609] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:20:03.780 [2024-07-11 16:35:40.575622] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:03.780 [2024-07-11 16:35:40.575725] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:03.780 [2024-07-11 16:35:40.576054] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:20:03.780 [2024-07-11 16:35:40.576078] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x61600000a880 00:20:03.780 [2024-07-11 16:35:40.576211] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.780 pt4 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:03.780 16:35:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:04.039 16:35:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.040 16:35:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.040 16:35:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:04.040 "name": "raid_bdev1", 00:20:04.040 "uuid": "d32bc84c-a394-4774-aab0-738d1288c571", 00:20:04.040 "strip_size_kb": 0, 00:20:04.040 "state": "online", 00:20:04.040 "raid_level": "raid1", 00:20:04.040 "superblock": true, 00:20:04.040 "num_base_bdevs": 4, 00:20:04.040 "num_base_bdevs_discovered": 4, 00:20:04.040 "num_base_bdevs_operational": 4, 00:20:04.040 "base_bdevs_list": [ 00:20:04.040 { 00:20:04.040 "name": "pt1", 00:20:04.040 "uuid": "d1d906d6-68a4-5eca-ac2d-5b4233b90b2c", 00:20:04.040 "is_configured": true, 00:20:04.040 "data_offset": 2048, 00:20:04.040 "data_size": 63488 00:20:04.040 }, 00:20:04.040 { 00:20:04.040 "name": "pt2", 00:20:04.040 "uuid": "179869c7-e6b5-5cef-9d81-8ba225f6d1a9", 00:20:04.040 "is_configured": true, 00:20:04.040 "data_offset": 2048, 00:20:04.040 "data_size": 63488 00:20:04.040 }, 00:20:04.040 { 00:20:04.040 "name": "pt3", 00:20:04.040 "uuid": "e6a8386c-4e39-5980-9839-d68290e11436", 00:20:04.040 "is_configured": true, 00:20:04.040 "data_offset": 2048, 00:20:04.040 "data_size": 63488 00:20:04.040 }, 00:20:04.040 { 00:20:04.040 "name": "pt4", 00:20:04.040 "uuid": "b3af08e5-c3ef-578e-a2d1-9a53b9e096d1", 00:20:04.040 "is_configured": true, 00:20:04.040 "data_offset": 2048, 00:20:04.040 "data_size": 63488 00:20:04.040 } 00:20:04.040 ] 00:20:04.040 }' 00:20:04.040 16:35:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:04.040 16:35:40 -- common/autotest_common.sh@10 -- # set +x 00:20:04.974 16:35:41 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:04.974 16:35:41 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:20:04.974 [2024-07-11 16:35:41.599097] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.974 16:35:41 -- bdev/bdev_raid.sh@430 -- # '[' d32bc84c-a394-4774-aab0-738d1288c571 '!=' d32bc84c-a394-4774-aab0-738d1288c571 ']' 00:20:04.974 16:35:41 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:20:04.974 16:35:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:20:04.974 16:35:41 -- bdev/bdev_raid.sh@196 -- # return 0 00:20:04.974 16:35:41 -- bdev/bdev_raid.sh@436 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:05.233 [2024-07-11 16:35:41.846975] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:05.233 16:35:41 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:05.233 16:35:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:05.233 16:35:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:05.233 16:35:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:05.233 16:35:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:05.233 16:35:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:05.233 16:35:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:05.233 16:35:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:05.233 16:35:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:05.233 16:35:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:05.233 16:35:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.233 16:35:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.492 16:35:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:05.492 "name": "raid_bdev1", 00:20:05.492 "uuid": "d32bc84c-a394-4774-aab0-738d1288c571", 00:20:05.492 "strip_size_kb": 0, 00:20:05.492 "state": "online", 00:20:05.492 "raid_level": "raid1", 00:20:05.492 "superblock": true, 00:20:05.492 "num_base_bdevs": 4, 00:20:05.492 "num_base_bdevs_discovered": 3, 00:20:05.492 "num_base_bdevs_operational": 3, 00:20:05.492 "base_bdevs_list": [ 00:20:05.492 { 00:20:05.492 "name": null, 00:20:05.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.492 "is_configured": false, 00:20:05.492 "data_offset": 2048, 00:20:05.492 "data_size": 63488 00:20:05.492 }, 00:20:05.492 { 00:20:05.492 "name": "pt2", 00:20:05.492 "uuid": "179869c7-e6b5-5cef-9d81-8ba225f6d1a9", 00:20:05.492 "is_configured": true, 00:20:05.492 "data_offset": 2048, 00:20:05.492 "data_size": 63488 00:20:05.492 }, 00:20:05.492 { 00:20:05.492 "name": "pt3", 00:20:05.492 "uuid": "e6a8386c-4e39-5980-9839-d68290e11436", 00:20:05.492 "is_configured": true, 00:20:05.492 "data_offset": 2048, 00:20:05.492 "data_size": 63488 00:20:05.492 }, 00:20:05.492 { 00:20:05.492 "name": "pt4", 00:20:05.492 "uuid": "b3af08e5-c3ef-578e-a2d1-9a53b9e096d1", 00:20:05.492 "is_configured": true, 00:20:05.492 "data_offset": 2048, 00:20:05.492 "data_size": 63488 00:20:05.492 } 00:20:05.492 ] 00:20:05.492 }' 00:20:05.492 16:35:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:05.492 16:35:42 -- common/autotest_common.sh@10 -- # set +x 00:20:06.060 16:35:42 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:06.318 [2024-07-11 16:35:42.869268] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:06.318 [2024-07-11 16:35:42.869299] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:06.318 [2024-07-11 16:35:42.869378] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.318 [2024-07-11 16:35:42.869454] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.318 [2024-07-11 16:35:42.869467] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name 
raid_bdev1, state offline 00:20:06.318 16:35:42 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.318 16:35:42 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:20:06.318 16:35:43 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:20:06.318 16:35:43 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:20:06.318 16:35:43 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:20:06.318 16:35:43 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:06.318 16:35:43 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:06.576 16:35:43 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:06.576 16:35:43 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:06.576 16:35:43 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:06.834 16:35:43 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:06.834 16:35:43 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:06.834 16:35:43 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:07.093 [2024-07-11 16:35:43.830281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:07.093 [2024-07-11 16:35:43.830371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:07.093 [2024-07-11 16:35:43.830403] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:07.093 [2024-07-11 16:35:43.830429] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:07.093 [2024-07-11 16:35:43.832523] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:07.093 [2024-07-11 16:35:43.832584] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:07.093 [2024-07-11 16:35:43.832700] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:07.093 [2024-07-11 16:35:43.832751] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:07.093 pt2 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:07.093 16:35:43 -- 
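What the trace is doing here, step by step: with raid_bdev1 deleted, every remaining passthru base bdev is dropped, then pt2 is re-created on top of malloc2 and the examine path finds the raid superblock and re-claims it, leaving raid_bdev1 half-assembled in the "configuring" state. A hedged shell sketch of that cycle, reusing the commands and UUIDs from the log (the loop bounds are illustrative):

  rpc_py() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  # raid_bdev1 is already deleted at this point; drop its old base bdevs.
  for i in 2 3 4; do
      rpc_py bdev_passthru_delete "pt$i"
  done

  # Re-creating pt2 triggers examine: the superblock on malloc2 is found,
  # pt2 is claimed, and raid_bdev1 reappears in "configuring".
  rpc_py bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'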
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.093 16:35:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.351 16:35:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:07.351 "name": "raid_bdev1", 00:20:07.351 "uuid": "d32bc84c-a394-4774-aab0-738d1288c571", 00:20:07.351 "strip_size_kb": 0, 00:20:07.351 "state": "configuring", 00:20:07.351 "raid_level": "raid1", 00:20:07.351 "superblock": true, 00:20:07.351 "num_base_bdevs": 4, 00:20:07.351 "num_base_bdevs_discovered": 1, 00:20:07.351 "num_base_bdevs_operational": 3, 00:20:07.351 "base_bdevs_list": [ 00:20:07.351 { 00:20:07.351 "name": null, 00:20:07.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.351 "is_configured": false, 00:20:07.351 "data_offset": 2048, 00:20:07.351 "data_size": 63488 00:20:07.351 }, 00:20:07.351 { 00:20:07.351 "name": "pt2", 00:20:07.351 "uuid": "179869c7-e6b5-5cef-9d81-8ba225f6d1a9", 00:20:07.351 "is_configured": true, 00:20:07.351 "data_offset": 2048, 00:20:07.351 "data_size": 63488 00:20:07.351 }, 00:20:07.351 { 00:20:07.351 "name": null, 00:20:07.351 "uuid": "e6a8386c-4e39-5980-9839-d68290e11436", 00:20:07.351 "is_configured": false, 00:20:07.351 "data_offset": 2048, 00:20:07.351 "data_size": 63488 00:20:07.351 }, 00:20:07.351 { 00:20:07.351 "name": null, 00:20:07.351 "uuid": "b3af08e5-c3ef-578e-a2d1-9a53b9e096d1", 00:20:07.351 "is_configured": false, 00:20:07.351 "data_offset": 2048, 00:20:07.351 "data_size": 63488 00:20:07.351 } 00:20:07.351 ] 00:20:07.351 }' 00:20:07.351 16:35:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:07.351 16:35:44 -- common/autotest_common.sh@10 -- # set +x 00:20:07.917 16:35:44 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:07.917 16:35:44 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:07.918 16:35:44 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:08.176 [2024-07-11 16:35:44.838474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:08.176 [2024-07-11 16:35:44.838539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.176 [2024-07-11 16:35:44.838574] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:08.176 [2024-07-11 16:35:44.838600] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.176 [2024-07-11 16:35:44.838983] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.176 [2024-07-11 16:35:44.839024] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:08.176 [2024-07-11 16:35:44.839110] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:08.177 [2024-07-11 16:35:44.839136] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:08.177 pt3 00:20:08.177 16:35:44 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:08.177 16:35:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:08.177 16:35:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:08.177 16:35:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:08.177 16:35:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:08.177 16:35:44 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=3 00:20:08.177 16:35:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:08.177 16:35:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:08.177 16:35:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:08.177 16:35:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:08.177 16:35:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.177 16:35:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.435 16:35:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:08.435 "name": "raid_bdev1", 00:20:08.435 "uuid": "d32bc84c-a394-4774-aab0-738d1288c571", 00:20:08.435 "strip_size_kb": 0, 00:20:08.435 "state": "configuring", 00:20:08.435 "raid_level": "raid1", 00:20:08.435 "superblock": true, 00:20:08.435 "num_base_bdevs": 4, 00:20:08.435 "num_base_bdevs_discovered": 2, 00:20:08.435 "num_base_bdevs_operational": 3, 00:20:08.435 "base_bdevs_list": [ 00:20:08.435 { 00:20:08.435 "name": null, 00:20:08.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.435 "is_configured": false, 00:20:08.435 "data_offset": 2048, 00:20:08.435 "data_size": 63488 00:20:08.435 }, 00:20:08.435 { 00:20:08.435 "name": "pt2", 00:20:08.435 "uuid": "179869c7-e6b5-5cef-9d81-8ba225f6d1a9", 00:20:08.435 "is_configured": true, 00:20:08.435 "data_offset": 2048, 00:20:08.435 "data_size": 63488 00:20:08.435 }, 00:20:08.435 { 00:20:08.435 "name": "pt3", 00:20:08.435 "uuid": "e6a8386c-4e39-5980-9839-d68290e11436", 00:20:08.435 "is_configured": true, 00:20:08.435 "data_offset": 2048, 00:20:08.435 "data_size": 63488 00:20:08.435 }, 00:20:08.435 { 00:20:08.435 "name": null, 00:20:08.435 "uuid": "b3af08e5-c3ef-578e-a2d1-9a53b9e096d1", 00:20:08.435 "is_configured": false, 00:20:08.435 "data_offset": 2048, 00:20:08.435 "data_size": 63488 00:20:08.435 } 00:20:08.435 ] 00:20:08.435 }' 00:20:08.435 16:35:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:08.435 16:35:45 -- common/autotest_common.sh@10 -- # set +x 00:20:09.002 16:35:45 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:20:09.002 16:35:45 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:20:09.002 16:35:45 -- bdev/bdev_raid.sh@462 -- # i=3 00:20:09.002 16:35:45 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:09.260 [2024-07-11 16:35:45.850707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:09.260 [2024-07-11 16:35:45.850790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:09.260 [2024-07-11 16:35:45.850824] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:20:09.260 [2024-07-11 16:35:45.850843] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:09.260 [2024-07-11 16:35:45.851302] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:09.260 [2024-07-11 16:35:45.851374] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:09.260 [2024-07-11 16:35:45.851483] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:09.260 [2024-07-11 16:35:45.851510] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:09.260 [2024-07-11 16:35:45.851649] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io 
device register 0x61600000bd80 00:20:09.260 [2024-07-11 16:35:45.851662] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:09.260 [2024-07-11 16:35:45.851789] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:09.260 [2024-07-11 16:35:45.852127] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:20:09.260 [2024-07-11 16:35:45.852151] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:20:09.260 [2024-07-11 16:35:45.852341] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:09.260 pt4 00:20:09.260 16:35:45 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:09.260 16:35:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:09.260 16:35:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:09.260 16:35:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:09.260 16:35:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:09.260 16:35:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:09.260 16:35:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:09.260 16:35:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:09.260 16:35:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:09.260 16:35:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:09.260 16:35:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.260 16:35:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.260 16:35:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:09.260 "name": "raid_bdev1", 00:20:09.260 "uuid": "d32bc84c-a394-4774-aab0-738d1288c571", 00:20:09.260 "strip_size_kb": 0, 00:20:09.260 "state": "online", 00:20:09.260 "raid_level": "raid1", 00:20:09.260 "superblock": true, 00:20:09.260 "num_base_bdevs": 4, 00:20:09.260 "num_base_bdevs_discovered": 3, 00:20:09.260 "num_base_bdevs_operational": 3, 00:20:09.260 "base_bdevs_list": [ 00:20:09.260 { 00:20:09.260 "name": null, 00:20:09.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.260 "is_configured": false, 00:20:09.260 "data_offset": 2048, 00:20:09.260 "data_size": 63488 00:20:09.260 }, 00:20:09.260 { 00:20:09.260 "name": "pt2", 00:20:09.260 "uuid": "179869c7-e6b5-5cef-9d81-8ba225f6d1a9", 00:20:09.260 "is_configured": true, 00:20:09.260 "data_offset": 2048, 00:20:09.260 "data_size": 63488 00:20:09.260 }, 00:20:09.260 { 00:20:09.260 "name": "pt3", 00:20:09.260 "uuid": "e6a8386c-4e39-5980-9839-d68290e11436", 00:20:09.260 "is_configured": true, 00:20:09.260 "data_offset": 2048, 00:20:09.260 "data_size": 63488 00:20:09.260 }, 00:20:09.260 { 00:20:09.260 "name": "pt4", 00:20:09.260 "uuid": "b3af08e5-c3ef-578e-a2d1-9a53b9e096d1", 00:20:09.260 "is_configured": true, 00:20:09.260 "data_offset": 2048, 00:20:09.260 "data_size": 63488 00:20:09.260 } 00:20:09.260 ] 00:20:09.260 }' 00:20:09.260 16:35:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:09.260 16:35:46 -- common/autotest_common.sh@10 -- # set +x 00:20:10.195 16:35:46 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:20:10.195 16:35:46 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:10.195 [2024-07-11 16:35:46.862887] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:20:10.195 [2024-07-11 16:35:46.862916] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:10.195 [2024-07-11 16:35:46.862970] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:10.195 [2024-07-11 16:35:46.863035] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:10.195 [2024-07-11 16:35:46.863046] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:20:10.195 16:35:46 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.195 16:35:46 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:20:10.461 16:35:47 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:20:10.461 16:35:47 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:20:10.461 16:35:47 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:10.736 [2024-07-11 16:35:47.274963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:10.736 [2024-07-11 16:35:47.275055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:10.736 [2024-07-11 16:35:47.275096] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:10.736 [2024-07-11 16:35:47.275116] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:10.736 [2024-07-11 16:35:47.277174] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:10.736 [2024-07-11 16:35:47.277265] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:10.736 [2024-07-11 16:35:47.277374] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:20:10.736 [2024-07-11 16:35:47.277422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:10.736 pt1 00:20:10.736 16:35:47 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:20:10.736 16:35:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:10.736 16:35:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:10.736 16:35:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:10.736 16:35:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:10.736 16:35:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:10.736 16:35:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:10.736 16:35:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:10.736 16:35:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:10.736 16:35:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:10.736 16:35:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.736 16:35:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.736 16:35:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:10.736 "name": "raid_bdev1", 00:20:10.736 "uuid": "d32bc84c-a394-4774-aab0-738d1288c571", 00:20:10.736 "strip_size_kb": 0, 00:20:10.736 "state": "configuring", 00:20:10.736 "raid_level": "raid1", 00:20:10.736 "superblock": true, 00:20:10.736 "num_base_bdevs": 4, 00:20:10.736 "num_base_bdevs_discovered": 1, 
00:20:10.736 "num_base_bdevs_operational": 4, 00:20:10.736 "base_bdevs_list": [ 00:20:10.736 { 00:20:10.736 "name": "pt1", 00:20:10.736 "uuid": "d1d906d6-68a4-5eca-ac2d-5b4233b90b2c", 00:20:10.736 "is_configured": true, 00:20:10.736 "data_offset": 2048, 00:20:10.736 "data_size": 63488 00:20:10.736 }, 00:20:10.736 { 00:20:10.736 "name": null, 00:20:10.736 "uuid": "179869c7-e6b5-5cef-9d81-8ba225f6d1a9", 00:20:10.736 "is_configured": false, 00:20:10.736 "data_offset": 2048, 00:20:10.736 "data_size": 63488 00:20:10.736 }, 00:20:10.736 { 00:20:10.736 "name": null, 00:20:10.736 "uuid": "e6a8386c-4e39-5980-9839-d68290e11436", 00:20:10.736 "is_configured": false, 00:20:10.736 "data_offset": 2048, 00:20:10.737 "data_size": 63488 00:20:10.737 }, 00:20:10.737 { 00:20:10.737 "name": null, 00:20:10.737 "uuid": "b3af08e5-c3ef-578e-a2d1-9a53b9e096d1", 00:20:10.737 "is_configured": false, 00:20:10.737 "data_offset": 2048, 00:20:10.737 "data_size": 63488 00:20:10.737 } 00:20:10.737 ] 00:20:10.737 }' 00:20:10.737 16:35:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:10.737 16:35:47 -- common/autotest_common.sh@10 -- # set +x 00:20:11.672 16:35:48 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:20:11.672 16:35:48 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:11.672 16:35:48 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:11.672 16:35:48 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:11.672 16:35:48 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:11.672 16:35:48 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:11.931 16:35:48 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:11.931 16:35:48 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:11.931 16:35:48 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:20:11.931 16:35:48 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:20:11.931 16:35:48 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:20:11.931 16:35:48 -- bdev/bdev_raid.sh@489 -- # i=3 00:20:11.931 16:35:48 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:12.189 [2024-07-11 16:35:48.964663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:12.189 [2024-07-11 16:35:48.964753] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:12.189 [2024-07-11 16:35:48.964784] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:20:12.189 [2024-07-11 16:35:48.964809] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:12.189 [2024-07-11 16:35:48.965327] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:12.189 [2024-07-11 16:35:48.965417] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:12.189 [2024-07-11 16:35:48.965529] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:20:12.189 [2024-07-11 16:35:48.965544] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:12.189 [2024-07-11 16:35:48.965551] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:12.189 
[2024-07-11 16:35:48.965583] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:20:12.189 [2024-07-11 16:35:48.965699] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:12.189 pt4 00:20:12.189 16:35:48 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:12.189 16:35:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:12.189 16:35:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:20:12.189 16:35:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:12.189 16:35:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:12.189 16:35:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:12.189 16:35:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:12.189 16:35:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:12.189 16:35:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:12.189 16:35:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:12.189 16:35:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.189 16:35:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.447 16:35:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:12.447 "name": "raid_bdev1", 00:20:12.447 "uuid": "d32bc84c-a394-4774-aab0-738d1288c571", 00:20:12.447 "strip_size_kb": 0, 00:20:12.447 "state": "configuring", 00:20:12.447 "raid_level": "raid1", 00:20:12.447 "superblock": true, 00:20:12.447 "num_base_bdevs": 4, 00:20:12.447 "num_base_bdevs_discovered": 1, 00:20:12.447 "num_base_bdevs_operational": 3, 00:20:12.447 "base_bdevs_list": [ 00:20:12.447 { 00:20:12.447 "name": null, 00:20:12.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.447 "is_configured": false, 00:20:12.447 "data_offset": 2048, 00:20:12.447 "data_size": 63488 00:20:12.447 }, 00:20:12.447 { 00:20:12.447 "name": null, 00:20:12.447 "uuid": "179869c7-e6b5-5cef-9d81-8ba225f6d1a9", 00:20:12.447 "is_configured": false, 00:20:12.447 "data_offset": 2048, 00:20:12.447 "data_size": 63488 00:20:12.447 }, 00:20:12.447 { 00:20:12.447 "name": null, 00:20:12.447 "uuid": "e6a8386c-4e39-5980-9839-d68290e11436", 00:20:12.447 "is_configured": false, 00:20:12.447 "data_offset": 2048, 00:20:12.447 "data_size": 63488 00:20:12.447 }, 00:20:12.447 { 00:20:12.447 "name": "pt4", 00:20:12.447 "uuid": "b3af08e5-c3ef-578e-a2d1-9a53b9e096d1", 00:20:12.447 "is_configured": true, 00:20:12.447 "data_offset": 2048, 00:20:12.448 "data_size": 63488 00:20:12.448 } 00:20:12.448 ] 00:20:12.448 }' 00:20:12.448 16:35:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:12.448 16:35:49 -- common/autotest_common.sh@10 -- # set +x 00:20:13.014 16:35:49 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:20:13.014 16:35:49 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:13.014 16:35:49 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:13.272 [2024-07-11 16:35:50.033297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:13.272 [2024-07-11 16:35:50.033419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.272 [2024-07-11 16:35:50.033464] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000d580 00:20:13.272 [2024-07-11 16:35:50.033494] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.272 [2024-07-11 16:35:50.034033] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.272 [2024-07-11 16:35:50.034102] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:13.272 [2024-07-11 16:35:50.034213] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:20:13.272 [2024-07-11 16:35:50.034244] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:13.272 pt2 00:20:13.272 16:35:50 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:13.272 16:35:50 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:13.272 16:35:50 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:13.531 [2024-07-11 16:35:50.217284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:13.531 [2024-07-11 16:35:50.217396] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:13.531 [2024-07-11 16:35:50.217428] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:20:13.531 [2024-07-11 16:35:50.217453] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:13.531 [2024-07-11 16:35:50.217910] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:13.531 [2024-07-11 16:35:50.218032] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:13.531 [2024-07-11 16:35:50.218141] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:20:13.531 [2024-07-11 16:35:50.218200] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:13.531 [2024-07-11 16:35:50.218333] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:20:13.531 [2024-07-11 16:35:50.218356] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:13.531 [2024-07-11 16:35:50.218471] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:20:13.531 [2024-07-11 16:35:50.218809] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:20:13.531 [2024-07-11 16:35:50.218833] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:20:13.531 [2024-07-11 16:35:50.218994] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.531 pt3 00:20:13.531 16:35:50 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:20:13.531 16:35:50 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:20:13.531 16:35:50 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:13.531 16:35:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:13.531 16:35:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:13.531 16:35:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:13.531 16:35:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:13.531 16:35:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:13.531 16:35:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:13.531 16:35:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:13.531 
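Note the transition being logged here: once pt2 and pt3 are back, num_base_bdevs_discovered reaches the operational count (3) and raid_bdev1 flips from "configuring" to "online" (the io device register / blockcnt lines). Reproducing this by hand, a small polling loop is enough to catch the flip; the loop and timeout below are illustrative, only the RPC and jq filter come from the trace:

  rpc_py() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  for _ in $(seq 1 50); do
      state=$(rpc_py bdev_raid_get_bdevs all |
              jq -r '.[] | select(.name == "raid_bdev1") | .state')
      [[ $state == online ]] && break
      sleep 0.1
  done
  echo "raid_bdev1 state: $state"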
16:35:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:13.531 16:35:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:13.531 16:35:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:13.531 16:35:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.789 16:35:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:13.789 "name": "raid_bdev1", 00:20:13.789 "uuid": "d32bc84c-a394-4774-aab0-738d1288c571", 00:20:13.789 "strip_size_kb": 0, 00:20:13.789 "state": "online", 00:20:13.789 "raid_level": "raid1", 00:20:13.789 "superblock": true, 00:20:13.789 "num_base_bdevs": 4, 00:20:13.789 "num_base_bdevs_discovered": 3, 00:20:13.789 "num_base_bdevs_operational": 3, 00:20:13.789 "base_bdevs_list": [ 00:20:13.789 { 00:20:13.790 "name": null, 00:20:13.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.790 "is_configured": false, 00:20:13.790 "data_offset": 2048, 00:20:13.790 "data_size": 63488 00:20:13.790 }, 00:20:13.790 { 00:20:13.790 "name": "pt2", 00:20:13.790 "uuid": "179869c7-e6b5-5cef-9d81-8ba225f6d1a9", 00:20:13.790 "is_configured": true, 00:20:13.790 "data_offset": 2048, 00:20:13.790 "data_size": 63488 00:20:13.790 }, 00:20:13.790 { 00:20:13.790 "name": "pt3", 00:20:13.790 "uuid": "e6a8386c-4e39-5980-9839-d68290e11436", 00:20:13.790 "is_configured": true, 00:20:13.790 "data_offset": 2048, 00:20:13.790 "data_size": 63488 00:20:13.790 }, 00:20:13.790 { 00:20:13.790 "name": "pt4", 00:20:13.790 "uuid": "b3af08e5-c3ef-578e-a2d1-9a53b9e096d1", 00:20:13.790 "is_configured": true, 00:20:13.790 "data_offset": 2048, 00:20:13.790 "data_size": 63488 00:20:13.790 } 00:20:13.790 ] 00:20:13.790 }' 00:20:13.790 16:35:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:13.790 16:35:50 -- common/autotest_common.sh@10 -- # set +x 00:20:14.356 16:35:51 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:14.356 16:35:51 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:20:14.613 [2024-07-11 16:35:51.177713] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:14.613 16:35:51 -- bdev/bdev_raid.sh@506 -- # '[' d32bc84c-a394-4774-aab0-738d1288c571 '!=' d32bc84c-a394-4774-aab0-738d1288c571 ']' 00:20:14.613 16:35:51 -- bdev/bdev_raid.sh@511 -- # killprocess 124823 00:20:14.613 16:35:51 -- common/autotest_common.sh@926 -- # '[' -z 124823 ']' 00:20:14.613 16:35:51 -- common/autotest_common.sh@930 -- # kill -0 124823 00:20:14.613 16:35:51 -- common/autotest_common.sh@931 -- # uname 00:20:14.613 16:35:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:14.613 16:35:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 124823 00:20:14.613 killing process with pid 124823 00:20:14.613 16:35:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:14.613 16:35:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:14.613 16:35:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 124823' 00:20:14.613 16:35:51 -- common/autotest_common.sh@945 -- # kill 124823 00:20:14.613 16:35:51 -- common/autotest_common.sh@950 -- # wait 124823 00:20:14.613 [2024-07-11 16:35:51.211700] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:14.613 [2024-07-11 16:35:51.211759] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:14.613 [2024-07-11 16:35:51.211859] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:14.614 [2024-07-11 16:35:51.211882] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:20:14.871 [2024-07-11 16:35:51.462720] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:15.804 ************************************ 00:20:15.804 END TEST raid_superblock_test 00:20:15.804 ************************************ 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@513 -- # return 0 00:20:15.804 00:20:15.804 real 0m20.233s 00:20:15.804 user 0m37.677s 00:20:15.804 sys 0m2.077s 00:20:15.804 16:35:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.804 16:35:52 -- common/autotest_common.sh@10 -- # set +x 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:20:15.804 16:35:52 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:15.804 16:35:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:15.804 16:35:52 -- common/autotest_common.sh@10 -- # set +x 00:20:15.804 ************************************ 00:20:15.804 START TEST raid_rebuild_test 00:20:15.804 ************************************ 00:20:15.804 16:35:52 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@544 -- # raid_pid=125513 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@545 -- # waitforlisten 125513 /var/tmp/spdk-raid.sock 00:20:15.804 16:35:52 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:15.804 16:35:52 -- 
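Unlike the superblock test, raid_rebuild_test drives its RPCs at a bdevperf process rather than a bare SPDK app, so it can push background I/O while base bdevs come and go. A minimal sketch of the launch-and-wait pattern visible above, assuming the same tree layout; the binary path and flags are copied from the trace, while the trap and the rpc_get_methods probe (the same idea as the waitforlisten helper) are assumptions:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  SOCK=/var/tmp/spdk-raid.sock

  # -z: come up idle and wait for RPCs; -L bdev_raid: the *DEBUG* lines seen
  # in this log; 50/50 randrw, 3 MiB I/Os, queue depth 2, 60 s, raid_bdev1.
  "$BDEVPERF" -r "$SOCK" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  trap 'kill $raid_pid' EXIT

  # Block until the socket answers RPCs before configuring any bdevs.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$SOCK" -t 60 rpc_get_methods >/dev/null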
common/autotest_common.sh@819 -- # '[' -z 125513 ']' 00:20:15.804 16:35:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:15.804 16:35:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:15.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:15.804 16:35:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:15.805 16:35:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:15.805 16:35:52 -- common/autotest_common.sh@10 -- # set +x 00:20:15.805 [2024-07-11 16:35:52.467407] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:15.805 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:15.805 Zero copy mechanism will not be used. 00:20:15.805 [2024-07-11 16:35:52.467569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125513 ] 00:20:16.062 [2024-07-11 16:35:52.622312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.062 [2024-07-11 16:35:52.834206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.319 [2024-07-11 16:35:52.996370] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:16.577 16:35:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:16.577 16:35:53 -- common/autotest_common.sh@852 -- # return 0 00:20:16.577 16:35:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:16.577 16:35:53 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:16.577 16:35:53 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:16.835 BaseBdev1 00:20:16.835 16:35:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:16.835 16:35:53 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:16.835 16:35:53 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:17.094 BaseBdev2 00:20:17.094 16:35:53 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:17.352 spare_malloc 00:20:17.352 16:35:54 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:17.610 spare_delay 00:20:17.610 16:35:54 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:17.610 [2024-07-11 16:35:54.379511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:17.610 [2024-07-11 16:35:54.379609] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.610 [2024-07-11 16:35:54.379639] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:17.610 [2024-07-11 16:35:54.379687] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.610 [2024-07-11 16:35:54.381729] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:17.610 [2024-07-11 
16:35:54.381778] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:17.610 spare 00:20:17.610 16:35:54 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:17.868 [2024-07-11 16:35:54.563666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:17.868 [2024-07-11 16:35:54.565329] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.868 [2024-07-11 16:35:54.565434] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:20:17.868 [2024-07-11 16:35:54.565449] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:17.868 [2024-07-11 16:35:54.565607] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:20:17.868 [2024-07-11 16:35:54.565906] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:20:17.868 [2024-07-11 16:35:54.565930] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:20:17.868 [2024-07-11 16:35:54.566079] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.868 16:35:54 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:17.868 16:35:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:17.868 16:35:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:17.868 16:35:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:17.868 16:35:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:17.868 16:35:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:17.868 16:35:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:17.868 16:35:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:17.868 16:35:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:17.868 16:35:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:17.868 16:35:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.868 16:35:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.126 16:35:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:18.126 "name": "raid_bdev1", 00:20:18.126 "uuid": "421e8708-253d-4a34-997d-5a3b7ca34975", 00:20:18.126 "strip_size_kb": 0, 00:20:18.126 "state": "online", 00:20:18.126 "raid_level": "raid1", 00:20:18.126 "superblock": false, 00:20:18.126 "num_base_bdevs": 2, 00:20:18.126 "num_base_bdevs_discovered": 2, 00:20:18.126 "num_base_bdevs_operational": 2, 00:20:18.126 "base_bdevs_list": [ 00:20:18.126 { 00:20:18.126 "name": "BaseBdev1", 00:20:18.126 "uuid": "7503ce31-d9f4-4709-a1ea-f5e5e79b946e", 00:20:18.126 "is_configured": true, 00:20:18.126 "data_offset": 0, 00:20:18.126 "data_size": 65536 00:20:18.126 }, 00:20:18.126 { 00:20:18.126 "name": "BaseBdev2", 00:20:18.126 "uuid": "ac7a5db8-772e-497c-8ef0-56b71e00308f", 00:20:18.126 "is_configured": true, 00:20:18.126 "data_offset": 0, 00:20:18.126 "data_size": 65536 00:20:18.126 } 00:20:18.126 ] 00:20:18.126 }' 00:20:18.126 16:35:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:18.126 16:35:54 -- common/autotest_common.sh@10 -- # set +x 00:20:18.692 16:35:55 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:18.692 16:35:55 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:18.950 [2024-07-11 16:35:55.543960] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.950 16:35:55 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:18.950 16:35:55 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.950 16:35:55 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:18.950 16:35:55 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:18.950 16:35:55 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:18.950 16:35:55 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:18.950 16:35:55 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:18.950 16:35:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:18.950 16:35:55 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:18.950 16:35:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:18.950 16:35:55 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:18.950 16:35:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:18.950 16:35:55 -- bdev/nbd_common.sh@12 -- # local i 00:20:18.950 16:35:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:18.950 16:35:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:18.950 16:35:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:19.208 [2024-07-11 16:35:55.903899] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:19.208 /dev/nbd0 00:20:19.208 16:35:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:19.208 16:35:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:19.208 16:35:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:19.208 16:35:55 -- common/autotest_common.sh@857 -- # local i 00:20:19.208 16:35:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:19.208 16:35:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:19.208 16:35:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:19.208 16:35:55 -- common/autotest_common.sh@861 -- # break 00:20:19.208 16:35:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:19.208 16:35:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:19.208 16:35:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:19.208 1+0 records in 00:20:19.208 1+0 records out 00:20:19.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238542 s, 17.2 MB/s 00:20:19.208 16:35:55 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.208 16:35:55 -- common/autotest_common.sh@874 -- # size=4096 00:20:19.208 16:35:55 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:19.208 16:35:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:19.208 16:35:55 -- common/autotest_common.sh@877 -- # return 0 00:20:19.208 16:35:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:19.208 16:35:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:19.208 16:35:55 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:19.208 16:35:55 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:19.208 16:35:55 -- bdev/bdev_raid.sh@586 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:24.472 65536+0 records in 00:20:24.472 65536+0 records out 00:20:24.472 33554432 bytes (34 MB, 32 MiB) copied, 4.46243 s, 7.5 MB/s 00:20:24.472 16:36:00 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@51 -- # local i 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:24.472 [2024-07-11 16:36:00.688679] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:24.472 16:36:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.473 16:36:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:24.473 16:36:00 -- bdev/nbd_common.sh@41 -- # break 00:20:24.473 16:36:00 -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.473 16:36:00 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:24.473 [2024-07-11 16:36:01.004606] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:24.473 "name": "raid_bdev1", 00:20:24.473 "uuid": "421e8708-253d-4a34-997d-5a3b7ca34975", 00:20:24.473 "strip_size_kb": 0, 00:20:24.473 "state": "online", 00:20:24.473 "raid_level": "raid1", 00:20:24.473 "superblock": false, 00:20:24.473 "num_base_bdevs": 2, 00:20:24.473 "num_base_bdevs_discovered": 1, 00:20:24.473 "num_base_bdevs_operational": 1, 00:20:24.473 "base_bdevs_list": [ 00:20:24.473 { 00:20:24.473 "name": null, 
00:20:24.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.473 "is_configured": false, 00:20:24.473 "data_offset": 0, 00:20:24.473 "data_size": 65536 00:20:24.473 }, 00:20:24.473 { 00:20:24.473 "name": "BaseBdev2", 00:20:24.473 "uuid": "ac7a5db8-772e-497c-8ef0-56b71e00308f", 00:20:24.473 "is_configured": true, 00:20:24.473 "data_offset": 0, 00:20:24.473 "data_size": 65536 00:20:24.473 } 00:20:24.473 ] 00:20:24.473 }' 00:20:24.473 16:36:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:24.473 16:36:01 -- common/autotest_common.sh@10 -- # set +x 00:20:25.407 16:36:01 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:25.407 [2024-07-11 16:36:02.024803] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:25.407 [2024-07-11 16:36:02.024851] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:25.407 [2024-07-11 16:36:02.036475] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b500 00:20:25.407 [2024-07-11 16:36:02.038144] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:25.407 16:36:02 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:26.350 16:36:03 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:26.350 16:36:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:26.350 16:36:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:26.350 16:36:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:26.350 16:36:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:26.350 16:36:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.350 16:36:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.607 16:36:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:26.607 "name": "raid_bdev1", 00:20:26.607 "uuid": "421e8708-253d-4a34-997d-5a3b7ca34975", 00:20:26.607 "strip_size_kb": 0, 00:20:26.607 "state": "online", 00:20:26.607 "raid_level": "raid1", 00:20:26.607 "superblock": false, 00:20:26.607 "num_base_bdevs": 2, 00:20:26.607 "num_base_bdevs_discovered": 2, 00:20:26.607 "num_base_bdevs_operational": 2, 00:20:26.607 "process": { 00:20:26.607 "type": "rebuild", 00:20:26.607 "target": "spare", 00:20:26.607 "progress": { 00:20:26.607 "blocks": 22528, 00:20:26.607 "percent": 34 00:20:26.607 } 00:20:26.607 }, 00:20:26.607 "base_bdevs_list": [ 00:20:26.607 { 00:20:26.607 "name": "spare", 00:20:26.607 "uuid": "db66ed45-b84a-5a7a-9824-bb5575b1b6a3", 00:20:26.607 "is_configured": true, 00:20:26.607 "data_offset": 0, 00:20:26.607 "data_size": 65536 00:20:26.607 }, 00:20:26.607 { 00:20:26.607 "name": "BaseBdev2", 00:20:26.607 "uuid": "ac7a5db8-772e-497c-8ef0-56b71e00308f", 00:20:26.607 "is_configured": true, 00:20:26.607 "data_offset": 0, 00:20:26.607 "data_size": 65536 00:20:26.607 } 00:20:26.607 ] 00:20:26.607 }' 00:20:26.607 16:36:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:26.607 16:36:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:26.607 16:36:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:26.607 16:36:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:26.607 16:36:03 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev spare 00:20:26.866 [2024-07-11 16:36:03.520047] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:26.866 [2024-07-11 16:36:03.545899] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:26.866 [2024-07-11 16:36:03.545990] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.866 16:36:03 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:26.866 16:36:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:26.866 16:36:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:26.866 16:36:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:26.866 16:36:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:26.866 16:36:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:26.866 16:36:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:26.866 16:36:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:26.866 16:36:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:26.866 16:36:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:26.866 16:36:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.866 16:36:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.124 16:36:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:27.124 "name": "raid_bdev1", 00:20:27.124 "uuid": "421e8708-253d-4a34-997d-5a3b7ca34975", 00:20:27.124 "strip_size_kb": 0, 00:20:27.124 "state": "online", 00:20:27.124 "raid_level": "raid1", 00:20:27.124 "superblock": false, 00:20:27.124 "num_base_bdevs": 2, 00:20:27.124 "num_base_bdevs_discovered": 1, 00:20:27.124 "num_base_bdevs_operational": 1, 00:20:27.124 "base_bdevs_list": [ 00:20:27.124 { 00:20:27.124 "name": null, 00:20:27.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.124 "is_configured": false, 00:20:27.124 "data_offset": 0, 00:20:27.124 "data_size": 65536 00:20:27.124 }, 00:20:27.124 { 00:20:27.124 "name": "BaseBdev2", 00:20:27.124 "uuid": "ac7a5db8-772e-497c-8ef0-56b71e00308f", 00:20:27.124 "is_configured": true, 00:20:27.124 "data_offset": 0, 00:20:27.124 "data_size": 65536 00:20:27.124 } 00:20:27.124 ] 00:20:27.124 }' 00:20:27.124 16:36:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:27.124 16:36:03 -- common/autotest_common.sh@10 -- # set +x 00:20:27.690 16:36:04 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:27.690 16:36:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:27.690 16:36:04 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:27.690 16:36:04 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:27.690 16:36:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:27.690 16:36:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.690 16:36:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.948 16:36:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:27.948 "name": "raid_bdev1", 00:20:27.948 "uuid": "421e8708-253d-4a34-997d-5a3b7ca34975", 00:20:27.948 "strip_size_kb": 0, 00:20:27.948 "state": "online", 00:20:27.948 "raid_level": "raid1", 00:20:27.948 "superblock": false, 00:20:27.948 "num_base_bdevs": 2, 00:20:27.948 "num_base_bdevs_discovered": 1, 00:20:27.948 
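The rebuild status these checks keep asserting lives under ".process" in the bdev_raid_get_bdevs output: a type, a target, and a progress object with blocks and percent (22528 blocks / 34% in the dumps above). A hedged progress monitor built from the exact jq filters the test uses; the watch loop itself is illustrative:

  rpc_py() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

  while :; do
      info=$(rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      # ".process.type" falls back to "none" once the rebuild finishes.
      [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
      jq -r '"rebuilding \(.process.target): \(.process.progress.percent)% (\(.process.progress.blocks) blocks)"' <<< "$info"
      sleep 1
  done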
"num_base_bdevs_operational": 1, 00:20:27.948 "base_bdevs_list": [ 00:20:27.948 { 00:20:27.948 "name": null, 00:20:27.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.948 "is_configured": false, 00:20:27.948 "data_offset": 0, 00:20:27.948 "data_size": 65536 00:20:27.948 }, 00:20:27.948 { 00:20:27.948 "name": "BaseBdev2", 00:20:27.948 "uuid": "ac7a5db8-772e-497c-8ef0-56b71e00308f", 00:20:27.948 "is_configured": true, 00:20:27.948 "data_offset": 0, 00:20:27.948 "data_size": 65536 00:20:27.948 } 00:20:27.948 ] 00:20:27.948 }' 00:20:27.949 16:36:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:27.949 16:36:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:27.949 16:36:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:28.207 16:36:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:28.207 16:36:04 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:28.207 [2024-07-11 16:36:04.937840] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:28.207 [2024-07-11 16:36:04.937881] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:28.207 [2024-07-11 16:36:04.949234] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:20:28.207 [2024-07-11 16:36:04.950885] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:28.207 16:36:04 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:29.580 16:36:05 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.580 16:36:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:29.580 16:36:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:29.580 16:36:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:29.580 16:36:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:29.580 16:36:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.580 16:36:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:29.580 "name": "raid_bdev1", 00:20:29.580 "uuid": "421e8708-253d-4a34-997d-5a3b7ca34975", 00:20:29.580 "strip_size_kb": 0, 00:20:29.580 "state": "online", 00:20:29.580 "raid_level": "raid1", 00:20:29.580 "superblock": false, 00:20:29.580 "num_base_bdevs": 2, 00:20:29.580 "num_base_bdevs_discovered": 2, 00:20:29.580 "num_base_bdevs_operational": 2, 00:20:29.580 "process": { 00:20:29.580 "type": "rebuild", 00:20:29.580 "target": "spare", 00:20:29.580 "progress": { 00:20:29.580 "blocks": 22528, 00:20:29.580 "percent": 34 00:20:29.580 } 00:20:29.580 }, 00:20:29.580 "base_bdevs_list": [ 00:20:29.580 { 00:20:29.580 "name": "spare", 00:20:29.580 "uuid": "db66ed45-b84a-5a7a-9824-bb5575b1b6a3", 00:20:29.580 "is_configured": true, 00:20:29.580 "data_offset": 0, 00:20:29.580 "data_size": 65536 00:20:29.580 }, 00:20:29.580 { 00:20:29.580 "name": "BaseBdev2", 00:20:29.580 "uuid": "ac7a5db8-772e-497c-8ef0-56b71e00308f", 00:20:29.580 "is_configured": true, 00:20:29.580 "data_offset": 0, 00:20:29.580 "data_size": 65536 00:20:29.580 } 00:20:29.580 ] 00:20:29.580 }' 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@657 -- # local timeout=382 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.580 16:36:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.837 16:36:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:29.837 "name": "raid_bdev1", 00:20:29.837 "uuid": "421e8708-253d-4a34-997d-5a3b7ca34975", 00:20:29.837 "strip_size_kb": 0, 00:20:29.837 "state": "online", 00:20:29.837 "raid_level": "raid1", 00:20:29.837 "superblock": false, 00:20:29.837 "num_base_bdevs": 2, 00:20:29.837 "num_base_bdevs_discovered": 2, 00:20:29.837 "num_base_bdevs_operational": 2, 00:20:29.837 "process": { 00:20:29.837 "type": "rebuild", 00:20:29.837 "target": "spare", 00:20:29.837 "progress": { 00:20:29.837 "blocks": 30720, 00:20:29.837 "percent": 46 00:20:29.837 } 00:20:29.837 }, 00:20:29.837 "base_bdevs_list": [ 00:20:29.837 { 00:20:29.837 "name": "spare", 00:20:29.837 "uuid": "db66ed45-b84a-5a7a-9824-bb5575b1b6a3", 00:20:29.837 "is_configured": true, 00:20:29.837 "data_offset": 0, 00:20:29.837 "data_size": 65536 00:20:29.837 }, 00:20:29.837 { 00:20:29.837 "name": "BaseBdev2", 00:20:29.837 "uuid": "ac7a5db8-772e-497c-8ef0-56b71e00308f", 00:20:29.837 "is_configured": true, 00:20:29.837 "data_offset": 0, 00:20:29.837 "data_size": 65536 00:20:29.837 } 00:20:29.837 ] 00:20:29.837 }' 00:20:29.837 16:36:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:29.837 16:36:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:29.837 16:36:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:29.837 16:36:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:29.837 16:36:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:31.210 16:36:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:31.210 16:36:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.210 16:36:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:31.210 16:36:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:31.210 16:36:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:31.210 16:36:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:31.210 16:36:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.210 16:36:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.210 16:36:07 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:31.210 "name": "raid_bdev1", 00:20:31.210 "uuid": "421e8708-253d-4a34-997d-5a3b7ca34975", 00:20:31.210 "strip_size_kb": 0, 00:20:31.210 "state": "online", 00:20:31.210 "raid_level": "raid1", 00:20:31.210 "superblock": false, 00:20:31.210 "num_base_bdevs": 2, 00:20:31.210 "num_base_bdevs_discovered": 2, 00:20:31.210 "num_base_bdevs_operational": 2, 00:20:31.210 "process": { 00:20:31.210 "type": "rebuild", 00:20:31.210 "target": "spare", 00:20:31.210 "progress": { 00:20:31.210 "blocks": 57344, 00:20:31.210 "percent": 87 00:20:31.210 } 00:20:31.210 }, 00:20:31.210 "base_bdevs_list": [ 00:20:31.210 { 00:20:31.210 "name": "spare", 00:20:31.210 "uuid": "db66ed45-b84a-5a7a-9824-bb5575b1b6a3", 00:20:31.210 "is_configured": true, 00:20:31.210 "data_offset": 0, 00:20:31.210 "data_size": 65536 00:20:31.210 }, 00:20:31.210 { 00:20:31.210 "name": "BaseBdev2", 00:20:31.210 "uuid": "ac7a5db8-772e-497c-8ef0-56b71e00308f", 00:20:31.210 "is_configured": true, 00:20:31.210 "data_offset": 0, 00:20:31.210 "data_size": 65536 00:20:31.210 } 00:20:31.210 ] 00:20:31.210 }' 00:20:31.210 16:36:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:31.210 16:36:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.210 16:36:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:31.210 16:36:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.210 16:36:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:31.469 [2024-07-11 16:36:08.167239] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:31.469 [2024-07-11 16:36:08.167313] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:31.469 [2024-07-11 16:36:08.167394] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.405 16:36:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:32.405 16:36:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:32.405 16:36:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:32.405 16:36:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:32.405 16:36:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:32.405 16:36:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:32.405 16:36:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.405 16:36:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.405 16:36:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:32.405 "name": "raid_bdev1", 00:20:32.405 "uuid": "421e8708-253d-4a34-997d-5a3b7ca34975", 00:20:32.405 "strip_size_kb": 0, 00:20:32.405 "state": "online", 00:20:32.405 "raid_level": "raid1", 00:20:32.405 "superblock": false, 00:20:32.405 "num_base_bdevs": 2, 00:20:32.405 "num_base_bdevs_discovered": 2, 00:20:32.405 "num_base_bdevs_operational": 2, 00:20:32.405 "base_bdevs_list": [ 00:20:32.405 { 00:20:32.405 "name": "spare", 00:20:32.405 "uuid": "db66ed45-b84a-5a7a-9824-bb5575b1b6a3", 00:20:32.405 "is_configured": true, 00:20:32.405 "data_offset": 0, 00:20:32.405 "data_size": 65536 00:20:32.405 }, 00:20:32.405 { 00:20:32.405 "name": "BaseBdev2", 00:20:32.405 "uuid": "ac7a5db8-772e-497c-8ef0-56b71e00308f", 00:20:32.405 "is_configured": true, 00:20:32.405 "data_offset": 0, 00:20:32.405 "data_size": 65536 00:20:32.405 } 00:20:32.405 ] 00:20:32.405 }' 
00:20:32.405 16:36:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:32.662 16:36:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:32.662 16:36:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:32.662 16:36:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:32.662 16:36:09 -- bdev/bdev_raid.sh@660 -- # break 00:20:32.662 16:36:09 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.662 16:36:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:32.662 16:36:09 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:32.662 16:36:09 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:32.662 16:36:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:32.662 16:36:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.662 16:36:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:32.921 "name": "raid_bdev1", 00:20:32.921 "uuid": "421e8708-253d-4a34-997d-5a3b7ca34975", 00:20:32.921 "strip_size_kb": 0, 00:20:32.921 "state": "online", 00:20:32.921 "raid_level": "raid1", 00:20:32.921 "superblock": false, 00:20:32.921 "num_base_bdevs": 2, 00:20:32.921 "num_base_bdevs_discovered": 2, 00:20:32.921 "num_base_bdevs_operational": 2, 00:20:32.921 "base_bdevs_list": [ 00:20:32.921 { 00:20:32.921 "name": "spare", 00:20:32.921 "uuid": "db66ed45-b84a-5a7a-9824-bb5575b1b6a3", 00:20:32.921 "is_configured": true, 00:20:32.921 "data_offset": 0, 00:20:32.921 "data_size": 65536 00:20:32.921 }, 00:20:32.921 { 00:20:32.921 "name": "BaseBdev2", 00:20:32.921 "uuid": "ac7a5db8-772e-497c-8ef0-56b71e00308f", 00:20:32.921 "is_configured": true, 00:20:32.921 "data_offset": 0, 00:20:32.921 "data_size": 65536 00:20:32.921 } 00:20:32.921 ] 00:20:32.921 }' 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.921 16:36:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.179 16:36:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:33.179 "name": "raid_bdev1", 00:20:33.179 "uuid": "421e8708-253d-4a34-997d-5a3b7ca34975", 00:20:33.179 "strip_size_kb": 0, 00:20:33.179 "state": "online", 
00:20:33.179 "raid_level": "raid1", 00:20:33.179 "superblock": false, 00:20:33.179 "num_base_bdevs": 2, 00:20:33.179 "num_base_bdevs_discovered": 2, 00:20:33.179 "num_base_bdevs_operational": 2, 00:20:33.179 "base_bdevs_list": [ 00:20:33.179 { 00:20:33.179 "name": "spare", 00:20:33.179 "uuid": "db66ed45-b84a-5a7a-9824-bb5575b1b6a3", 00:20:33.179 "is_configured": true, 00:20:33.179 "data_offset": 0, 00:20:33.179 "data_size": 65536 00:20:33.179 }, 00:20:33.179 { 00:20:33.179 "name": "BaseBdev2", 00:20:33.179 "uuid": "ac7a5db8-772e-497c-8ef0-56b71e00308f", 00:20:33.179 "is_configured": true, 00:20:33.179 "data_offset": 0, 00:20:33.179 "data_size": 65536 00:20:33.179 } 00:20:33.179 ] 00:20:33.179 }' 00:20:33.179 16:36:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:33.179 16:36:09 -- common/autotest_common.sh@10 -- # set +x 00:20:33.745 16:36:10 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:34.003 [2024-07-11 16:36:10.704945] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:34.003 [2024-07-11 16:36:10.704985] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:34.003 [2024-07-11 16:36:10.705122] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:34.003 [2024-07-11 16:36:10.705206] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:34.003 [2024-07-11 16:36:10.705218] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:20:34.003 16:36:10 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.003 16:36:10 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:34.262 16:36:10 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:34.262 16:36:10 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:34.262 16:36:10 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:34.262 16:36:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:34.262 16:36:10 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:34.262 16:36:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:34.262 16:36:10 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:34.262 16:36:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:34.262 16:36:10 -- bdev/nbd_common.sh@12 -- # local i 00:20:34.262 16:36:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:34.262 16:36:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:34.262 16:36:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:34.549 /dev/nbd0 00:20:34.549 16:36:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:34.549 16:36:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:34.549 16:36:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:34.549 16:36:11 -- common/autotest_common.sh@857 -- # local i 00:20:34.549 16:36:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:34.549 16:36:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:34.549 16:36:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:34.549 16:36:11 -- common/autotest_common.sh@861 -- # break 00:20:34.549 16:36:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:34.549 
16:36:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:34.549 16:36:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.549 1+0 records in 00:20:34.549 1+0 records out 00:20:34.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000579291 s, 7.1 MB/s 00:20:34.549 16:36:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.549 16:36:11 -- common/autotest_common.sh@874 -- # size=4096 00:20:34.549 16:36:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.549 16:36:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:34.549 16:36:11 -- common/autotest_common.sh@877 -- # return 0 00:20:34.549 16:36:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.549 16:36:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:34.549 16:36:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:34.808 /dev/nbd1 00:20:34.808 16:36:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:34.808 16:36:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:34.808 16:36:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:34.808 16:36:11 -- common/autotest_common.sh@857 -- # local i 00:20:34.808 16:36:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:34.808 16:36:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:34.808 16:36:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:34.808 16:36:11 -- common/autotest_common.sh@861 -- # break 00:20:34.808 16:36:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:34.808 16:36:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:34.808 16:36:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:34.808 1+0 records in 00:20:34.808 1+0 records out 00:20:34.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00208342 s, 2.0 MB/s 00:20:34.808 16:36:11 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.808 16:36:11 -- common/autotest_common.sh@874 -- # size=4096 00:20:34.808 16:36:11 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:34.808 16:36:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:34.808 16:36:11 -- common/autotest_common.sh@877 -- # return 0 00:20:34.808 16:36:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:34.808 16:36:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:34.808 16:36:11 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:35.066 16:36:11 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:35.066 16:36:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:35.066 16:36:11 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:35.066 16:36:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:35.066 16:36:11 -- bdev/nbd_common.sh@51 -- # local i 00:20:35.066 16:36:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:35.066 16:36:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:35.066 16:36:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:35.066 16:36:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:20:35.066 16:36:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:35.066 16:36:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:35.066 16:36:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.066 16:36:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:35.066 16:36:11 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:35.325 16:36:11 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:35.325 16:36:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.325 16:36:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:35.325 16:36:11 -- bdev/nbd_common.sh@41 -- # break 00:20:35.325 16:36:11 -- bdev/nbd_common.sh@45 -- # return 0 00:20:35.325 16:36:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:35.325 16:36:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:35.584 16:36:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:35.585 16:36:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:35.585 16:36:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:35.585 16:36:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:35.585 16:36:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.585 16:36:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:35.585 16:36:12 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:35.585 16:36:12 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:35.585 16:36:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:35.585 16:36:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:35.585 16:36:12 -- bdev/nbd_common.sh@41 -- # break 00:20:35.585 16:36:12 -- bdev/nbd_common.sh@45 -- # return 0 00:20:35.585 16:36:12 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:35.585 16:36:12 -- bdev/bdev_raid.sh@709 -- # killprocess 125513 00:20:35.585 16:36:12 -- common/autotest_common.sh@926 -- # '[' -z 125513 ']' 00:20:35.585 16:36:12 -- common/autotest_common.sh@930 -- # kill -0 125513 00:20:35.585 16:36:12 -- common/autotest_common.sh@931 -- # uname 00:20:35.585 16:36:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:35.585 16:36:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 125513 00:20:35.585 16:36:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:35.585 killing process with pid 125513 00:20:35.585 16:36:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:35.585 16:36:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 125513' 00:20:35.585 Received shutdown signal, test time was about 60.000000 seconds 00:20:35.585 00:20:35.585 Latency(us) 00:20:35.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.585 =================================================================================================================== 00:20:35.585 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.585 16:36:12 -- common/autotest_common.sh@945 -- # kill 125513 00:20:35.585 16:36:12 -- common/autotest_common.sh@950 -- # wait 125513 00:20:35.585 [2024-07-11 16:36:12.359095] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:35.844 [2024-07-11 16:36:12.552895] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:36.778 16:36:13 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:36.778 00:20:36.779 real 0m21.050s 00:20:36.779 user 0m29.147s 00:20:36.779 sys 0m3.462s 00:20:36.779 16:36:13 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:20:36.779 ************************************ 00:20:36.779 16:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:36.779 END TEST raid_rebuild_test 00:20:36.779 ************************************ 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:20:36.779 16:36:13 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:20:36.779 16:36:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:36.779 16:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:36.779 ************************************ 00:20:36.779 START TEST raid_rebuild_test_sb 00:20:36.779 ************************************ 00:20:36.779 16:36:13 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true false 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@544 -- # raid_pid=126082 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126082 /var/tmp/spdk-raid.sock 00:20:36.779 16:36:13 -- common/autotest_common.sh@819 -- # '[' -z 126082 ']' 00:20:36.779 16:36:13 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:36.779 16:36:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:36.779 16:36:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:36.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:36.779 16:36:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:20:36.779 16:36:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:36.779 16:36:13 -- common/autotest_common.sh@10 -- # set +x 00:20:36.779 [2024-07-11 16:36:13.583725] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:36.779 [2024-07-11 16:36:13.583915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126082 ] 00:20:36.779 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:36.779 Zero copy mechanism will not be used. 00:20:37.036 [2024-07-11 16:36:13.750109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.295 [2024-07-11 16:36:13.905647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.295 [2024-07-11 16:36:14.068253] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:37.862 16:36:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:37.862 16:36:14 -- common/autotest_common.sh@852 -- # return 0 00:20:37.862 16:36:14 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:37.862 16:36:14 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:37.862 16:36:14 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:37.862 BaseBdev1_malloc 00:20:37.862 16:36:14 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:38.134 [2024-07-11 16:36:14.878379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:38.134 [2024-07-11 16:36:14.878478] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.134 [2024-07-11 16:36:14.878510] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:38.134 [2024-07-11 16:36:14.878599] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.134 [2024-07-11 16:36:14.880668] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:38.134 [2024-07-11 16:36:14.880714] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:38.134 BaseBdev1 00:20:38.134 16:36:14 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:38.134 16:36:14 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:38.134 16:36:14 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:38.404 BaseBdev2_malloc 00:20:38.404 16:36:15 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:38.664 [2024-07-11 16:36:15.294125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:38.664 [2024-07-11 16:36:15.294210] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:38.664 [2024-07-11 16:36:15.294249] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:38.664 [2024-07-11 16:36:15.294298] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:38.664 [2024-07-11 16:36:15.296206] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:20:38.664 [2024-07-11 16:36:15.296275] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:38.664 BaseBdev2 00:20:38.664 16:36:15 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:38.923 spare_malloc 00:20:38.923 16:36:15 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:38.923 spare_delay 00:20:38.923 16:36:15 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:39.182 [2024-07-11 16:36:15.862541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:39.182 [2024-07-11 16:36:15.862637] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.182 [2024-07-11 16:36:15.862690] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:39.182 [2024-07-11 16:36:15.862728] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.182 [2024-07-11 16:36:15.864663] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.182 [2024-07-11 16:36:15.864731] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:39.182 spare 00:20:39.182 16:36:15 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:39.441 [2024-07-11 16:36:16.050633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:39.441 [2024-07-11 16:36:16.052264] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:39.441 [2024-07-11 16:36:16.052507] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:20:39.441 [2024-07-11 16:36:16.052530] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:39.441 [2024-07-11 16:36:16.052649] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:39.441 [2024-07-11 16:36:16.053007] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:20:39.441 [2024-07-11 16:36:16.053047] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:20:39.441 [2024-07-11 16:36:16.053215] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:39.441 16:36:16 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:39.441 16:36:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:39.441 16:36:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:39.441 16:36:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:39.441 16:36:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:39.441 16:36:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:39.441 16:36:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:39.441 16:36:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:39.441 16:36:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:39.441 16:36:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:39.441 16:36:16 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.441 16:36:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.700 16:36:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:39.700 "name": "raid_bdev1", 00:20:39.700 "uuid": "ce676e2a-ab2e-4a19-9eb8-cf9d5f1821ba", 00:20:39.700 "strip_size_kb": 0, 00:20:39.700 "state": "online", 00:20:39.700 "raid_level": "raid1", 00:20:39.700 "superblock": true, 00:20:39.700 "num_base_bdevs": 2, 00:20:39.700 "num_base_bdevs_discovered": 2, 00:20:39.700 "num_base_bdevs_operational": 2, 00:20:39.700 "base_bdevs_list": [ 00:20:39.700 { 00:20:39.700 "name": "BaseBdev1", 00:20:39.700 "uuid": "dd87c3a9-3f5a-5e46-8f93-6fc9cf27c6bc", 00:20:39.700 "is_configured": true, 00:20:39.700 "data_offset": 2048, 00:20:39.700 "data_size": 63488 00:20:39.700 }, 00:20:39.700 { 00:20:39.700 "name": "BaseBdev2", 00:20:39.700 "uuid": "b9061aca-70df-5588-9772-143b9b803cca", 00:20:39.700 "is_configured": true, 00:20:39.700 "data_offset": 2048, 00:20:39.700 "data_size": 63488 00:20:39.700 } 00:20:39.700 ] 00:20:39.700 }' 00:20:39.700 16:36:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:39.700 16:36:16 -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 16:36:16 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:40.268 16:36:16 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:40.527 [2024-07-11 16:36:17.102948] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:40.527 16:36:17 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:40.527 16:36:17 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.527 16:36:17 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:40.785 16:36:17 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:40.785 16:36:17 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:40.785 16:36:17 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:40.785 16:36:17 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:40.785 16:36:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:40.785 16:36:17 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:40.785 16:36:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:40.785 16:36:17 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:40.785 16:36:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:40.785 16:36:17 -- bdev/nbd_common.sh@12 -- # local i 00:20:40.785 16:36:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:40.785 16:36:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:40.785 16:36:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:40.785 [2024-07-11 16:36:17.574871] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:41.044 /dev/nbd0 00:20:41.044 16:36:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:41.044 16:36:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:41.044 16:36:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:41.044 16:36:17 -- common/autotest_common.sh@857 -- # local i 00:20:41.044 16:36:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:41.044 16:36:17 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:41.044 16:36:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:41.044 16:36:17 -- common/autotest_common.sh@861 -- # break 00:20:41.044 16:36:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:41.044 16:36:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:41.044 16:36:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:41.044 1+0 records in 00:20:41.044 1+0 records out 00:20:41.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351262 s, 11.7 MB/s 00:20:41.044 16:36:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.044 16:36:17 -- common/autotest_common.sh@874 -- # size=4096 00:20:41.044 16:36:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:41.044 16:36:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:41.044 16:36:17 -- common/autotest_common.sh@877 -- # return 0 00:20:41.044 16:36:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:41.044 16:36:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:41.044 16:36:17 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:41.044 16:36:17 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:41.044 16:36:17 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:46.312 63488+0 records in 00:20:46.312 63488+0 records out 00:20:46.312 32505856 bytes (33 MB, 31 MiB) copied, 4.83239 s, 6.7 MB/s 00:20:46.312 16:36:22 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@51 -- # local i 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:46.312 [2024-07-11 16:36:22.710889] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@41 -- # break 00:20:46.312 16:36:22 -- bdev/nbd_common.sh@45 -- # return 0 00:20:46.312 16:36:22 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:46.312 [2024-07-11 16:36:23.062512] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:46.312 16:36:23 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:46.312 16:36:23 
-- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:46.312 16:36:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:46.312 16:36:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:46.312 16:36:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:46.312 16:36:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:46.312 16:36:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:46.312 16:36:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:46.312 16:36:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:46.312 16:36:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:46.312 16:36:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.312 16:36:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:46.569 16:36:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:46.569 "name": "raid_bdev1", 00:20:46.569 "uuid": "ce676e2a-ab2e-4a19-9eb8-cf9d5f1821ba", 00:20:46.569 "strip_size_kb": 0, 00:20:46.569 "state": "online", 00:20:46.569 "raid_level": "raid1", 00:20:46.569 "superblock": true, 00:20:46.569 "num_base_bdevs": 2, 00:20:46.569 "num_base_bdevs_discovered": 1, 00:20:46.569 "num_base_bdevs_operational": 1, 00:20:46.569 "base_bdevs_list": [ 00:20:46.569 { 00:20:46.569 "name": null, 00:20:46.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.569 "is_configured": false, 00:20:46.569 "data_offset": 2048, 00:20:46.569 "data_size": 63488 00:20:46.569 }, 00:20:46.569 { 00:20:46.569 "name": "BaseBdev2", 00:20:46.569 "uuid": "b9061aca-70df-5588-9772-143b9b803cca", 00:20:46.569 "is_configured": true, 00:20:46.569 "data_offset": 2048, 00:20:46.569 "data_size": 63488 00:20:46.569 } 00:20:46.569 ] 00:20:46.569 }' 00:20:46.569 16:36:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:46.569 16:36:23 -- common/autotest_common.sh@10 -- # set +x 00:20:47.135 16:36:23 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:47.394 [2024-07-11 16:36:24.054687] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:47.394 [2024-07-11 16:36:24.054737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:47.394 [2024-07-11 16:36:24.066425] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4e30 00:20:47.394 [2024-07-11 16:36:24.068120] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:47.394 16:36:24 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:48.331 16:36:25 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:48.331 16:36:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:48.331 16:36:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:48.331 16:36:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:48.331 16:36:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:48.331 16:36:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.331 16:36:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.589 16:36:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:48.589 "name": "raid_bdev1", 00:20:48.589 "uuid": "ce676e2a-ab2e-4a19-9eb8-cf9d5f1821ba", 00:20:48.589 
"strip_size_kb": 0, 00:20:48.589 "state": "online", 00:20:48.589 "raid_level": "raid1", 00:20:48.589 "superblock": true, 00:20:48.589 "num_base_bdevs": 2, 00:20:48.589 "num_base_bdevs_discovered": 2, 00:20:48.589 "num_base_bdevs_operational": 2, 00:20:48.589 "process": { 00:20:48.589 "type": "rebuild", 00:20:48.589 "target": "spare", 00:20:48.589 "progress": { 00:20:48.589 "blocks": 24576, 00:20:48.589 "percent": 38 00:20:48.589 } 00:20:48.589 }, 00:20:48.589 "base_bdevs_list": [ 00:20:48.589 { 00:20:48.589 "name": "spare", 00:20:48.589 "uuid": "ead885a6-d5fe-55e8-b666-e7b2f5578b6f", 00:20:48.589 "is_configured": true, 00:20:48.589 "data_offset": 2048, 00:20:48.589 "data_size": 63488 00:20:48.589 }, 00:20:48.589 { 00:20:48.589 "name": "BaseBdev2", 00:20:48.589 "uuid": "b9061aca-70df-5588-9772-143b9b803cca", 00:20:48.589 "is_configured": true, 00:20:48.589 "data_offset": 2048, 00:20:48.589 "data_size": 63488 00:20:48.589 } 00:20:48.589 ] 00:20:48.589 }' 00:20:48.589 16:36:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:48.589 16:36:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:48.589 16:36:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:48.848 16:36:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.848 16:36:25 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:48.848 [2024-07-11 16:36:25.626366] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:49.107 [2024-07-11 16:36:25.676654] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:49.107 [2024-07-11 16:36:25.676753] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:49.107 "name": "raid_bdev1", 00:20:49.107 "uuid": "ce676e2a-ab2e-4a19-9eb8-cf9d5f1821ba", 00:20:49.107 "strip_size_kb": 0, 00:20:49.107 "state": "online", 00:20:49.107 "raid_level": "raid1", 00:20:49.107 "superblock": true, 00:20:49.107 "num_base_bdevs": 2, 00:20:49.107 "num_base_bdevs_discovered": 1, 00:20:49.107 "num_base_bdevs_operational": 1, 00:20:49.107 "base_bdevs_list": [ 00:20:49.107 { 00:20:49.107 "name": null, 00:20:49.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.107 "is_configured": false, 00:20:49.107 "data_offset": 2048, 00:20:49.107 "data_size": 63488 00:20:49.107 }, 
00:20:49.107 { 00:20:49.107 "name": "BaseBdev2", 00:20:49.107 "uuid": "b9061aca-70df-5588-9772-143b9b803cca", 00:20:49.107 "is_configured": true, 00:20:49.107 "data_offset": 2048, 00:20:49.107 "data_size": 63488 00:20:49.107 } 00:20:49.107 ] 00:20:49.107 }' 00:20:49.107 16:36:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:49.107 16:36:25 -- common/autotest_common.sh@10 -- # set +x 00:20:50.042 16:36:26 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:50.042 16:36:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:50.042 16:36:26 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:50.042 16:36:26 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:50.042 16:36:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:50.042 16:36:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.042 16:36:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.042 16:36:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:50.042 "name": "raid_bdev1", 00:20:50.042 "uuid": "ce676e2a-ab2e-4a19-9eb8-cf9d5f1821ba", 00:20:50.042 "strip_size_kb": 0, 00:20:50.042 "state": "online", 00:20:50.042 "raid_level": "raid1", 00:20:50.042 "superblock": true, 00:20:50.042 "num_base_bdevs": 2, 00:20:50.042 "num_base_bdevs_discovered": 1, 00:20:50.042 "num_base_bdevs_operational": 1, 00:20:50.042 "base_bdevs_list": [ 00:20:50.042 { 00:20:50.042 "name": null, 00:20:50.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.042 "is_configured": false, 00:20:50.042 "data_offset": 2048, 00:20:50.042 "data_size": 63488 00:20:50.042 }, 00:20:50.042 { 00:20:50.042 "name": "BaseBdev2", 00:20:50.042 "uuid": "b9061aca-70df-5588-9772-143b9b803cca", 00:20:50.042 "is_configured": true, 00:20:50.042 "data_offset": 2048, 00:20:50.042 "data_size": 63488 00:20:50.042 } 00:20:50.042 ] 00:20:50.042 }' 00:20:50.042 16:36:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:50.042 16:36:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:50.042 16:36:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:50.042 16:36:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:50.042 16:36:26 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:50.300 [2024-07-11 16:36:26.957834] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:50.300 [2024-07-11 16:36:26.957876] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:50.300 [2024-07-11 16:36:26.969232] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4fd0 00:20:50.300 [2024-07-11 16:36:26.970927] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:50.300 16:36:26 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:51.235 16:36:27 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.235 16:36:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:51.235 16:36:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:51.235 16:36:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:51.235 16:36:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:51.235 16:36:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.235 16:36:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.493 16:36:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:51.493 "name": "raid_bdev1", 00:20:51.493 "uuid": "ce676e2a-ab2e-4a19-9eb8-cf9d5f1821ba", 00:20:51.493 "strip_size_kb": 0, 00:20:51.493 "state": "online", 00:20:51.493 "raid_level": "raid1", 00:20:51.493 "superblock": true, 00:20:51.493 "num_base_bdevs": 2, 00:20:51.493 "num_base_bdevs_discovered": 2, 00:20:51.493 "num_base_bdevs_operational": 2, 00:20:51.493 "process": { 00:20:51.493 "type": "rebuild", 00:20:51.493 "target": "spare", 00:20:51.493 "progress": { 00:20:51.493 "blocks": 24576, 00:20:51.493 "percent": 38 00:20:51.493 } 00:20:51.493 }, 00:20:51.493 "base_bdevs_list": [ 00:20:51.493 { 00:20:51.493 "name": "spare", 00:20:51.493 "uuid": "ead885a6-d5fe-55e8-b666-e7b2f5578b6f", 00:20:51.493 "is_configured": true, 00:20:51.493 "data_offset": 2048, 00:20:51.493 "data_size": 63488 00:20:51.493 }, 00:20:51.493 { 00:20:51.493 "name": "BaseBdev2", 00:20:51.493 "uuid": "b9061aca-70df-5588-9772-143b9b803cca", 00:20:51.493 "is_configured": true, 00:20:51.493 "data_offset": 2048, 00:20:51.493 "data_size": 63488 00:20:51.493 } 00:20:51.493 ] 00:20:51.493 }' 00:20:51.493 16:36:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:51.493 16:36:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:51.493 16:36:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:51.750 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@657 -- # local timeout=404 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.750 16:36:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.008 16:36:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:52.008 "name": "raid_bdev1", 00:20:52.008 "uuid": "ce676e2a-ab2e-4a19-9eb8-cf9d5f1821ba", 00:20:52.008 "strip_size_kb": 0, 00:20:52.008 "state": "online", 00:20:52.008 "raid_level": "raid1", 00:20:52.008 "superblock": true, 00:20:52.008 "num_base_bdevs": 2, 00:20:52.008 "num_base_bdevs_discovered": 2, 00:20:52.008 "num_base_bdevs_operational": 2, 00:20:52.008 "process": { 00:20:52.008 "type": "rebuild", 00:20:52.008 "target": "spare", 00:20:52.008 "progress": { 00:20:52.008 "blocks": 30720, 00:20:52.008 "percent": 48 00:20:52.008 } 00:20:52.008 }, 00:20:52.008 
"base_bdevs_list": [ 00:20:52.008 { 00:20:52.008 "name": "spare", 00:20:52.008 "uuid": "ead885a6-d5fe-55e8-b666-e7b2f5578b6f", 00:20:52.008 "is_configured": true, 00:20:52.008 "data_offset": 2048, 00:20:52.008 "data_size": 63488 00:20:52.008 }, 00:20:52.008 { 00:20:52.008 "name": "BaseBdev2", 00:20:52.008 "uuid": "b9061aca-70df-5588-9772-143b9b803cca", 00:20:52.008 "is_configured": true, 00:20:52.008 "data_offset": 2048, 00:20:52.008 "data_size": 63488 00:20:52.008 } 00:20:52.008 ] 00:20:52.008 }' 00:20:52.008 16:36:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:52.008 16:36:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.008 16:36:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:52.008 16:36:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.008 16:36:28 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:52.939 16:36:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:52.939 16:36:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.939 16:36:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:52.939 16:36:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:52.939 16:36:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:52.939 16:36:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:52.939 16:36:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.939 16:36:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.196 16:36:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:53.196 "name": "raid_bdev1", 00:20:53.196 "uuid": "ce676e2a-ab2e-4a19-9eb8-cf9d5f1821ba", 00:20:53.196 "strip_size_kb": 0, 00:20:53.196 "state": "online", 00:20:53.196 "raid_level": "raid1", 00:20:53.196 "superblock": true, 00:20:53.196 "num_base_bdevs": 2, 00:20:53.196 "num_base_bdevs_discovered": 2, 00:20:53.196 "num_base_bdevs_operational": 2, 00:20:53.196 "process": { 00:20:53.196 "type": "rebuild", 00:20:53.196 "target": "spare", 00:20:53.196 "progress": { 00:20:53.196 "blocks": 57344, 00:20:53.196 "percent": 90 00:20:53.196 } 00:20:53.196 }, 00:20:53.196 "base_bdevs_list": [ 00:20:53.196 { 00:20:53.196 "name": "spare", 00:20:53.196 "uuid": "ead885a6-d5fe-55e8-b666-e7b2f5578b6f", 00:20:53.196 "is_configured": true, 00:20:53.196 "data_offset": 2048, 00:20:53.196 "data_size": 63488 00:20:53.196 }, 00:20:53.196 { 00:20:53.196 "name": "BaseBdev2", 00:20:53.196 "uuid": "b9061aca-70df-5588-9772-143b9b803cca", 00:20:53.196 "is_configured": true, 00:20:53.196 "data_offset": 2048, 00:20:53.196 "data_size": 63488 00:20:53.196 } 00:20:53.196 ] 00:20:53.196 }' 00:20:53.196 16:36:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:53.196 16:36:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.196 16:36:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:53.466 16:36:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.466 16:36:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:53.466 [2024-07-11 16:36:30.087045] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:53.466 [2024-07-11 16:36:30.087129] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:53.466 [2024-07-11 16:36:30.087267] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:20:54.413 16:36:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:54.413 16:36:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.413 16:36:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:54.413 16:36:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:54.413 16:36:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:54.413 16:36:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:54.413 16:36:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.413 16:36:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.672 16:36:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:54.672 "name": "raid_bdev1", 00:20:54.672 "uuid": "ce676e2a-ab2e-4a19-9eb8-cf9d5f1821ba", 00:20:54.672 "strip_size_kb": 0, 00:20:54.672 "state": "online", 00:20:54.672 "raid_level": "raid1", 00:20:54.672 "superblock": true, 00:20:54.672 "num_base_bdevs": 2, 00:20:54.672 "num_base_bdevs_discovered": 2, 00:20:54.672 "num_base_bdevs_operational": 2, 00:20:54.672 "base_bdevs_list": [ 00:20:54.672 { 00:20:54.672 "name": "spare", 00:20:54.672 "uuid": "ead885a6-d5fe-55e8-b666-e7b2f5578b6f", 00:20:54.672 "is_configured": true, 00:20:54.672 "data_offset": 2048, 00:20:54.672 "data_size": 63488 00:20:54.672 }, 00:20:54.672 { 00:20:54.672 "name": "BaseBdev2", 00:20:54.672 "uuid": "b9061aca-70df-5588-9772-143b9b803cca", 00:20:54.672 "is_configured": true, 00:20:54.672 "data_offset": 2048, 00:20:54.672 "data_size": 63488 00:20:54.672 } 00:20:54.672 ] 00:20:54.672 }' 00:20:54.672 16:36:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:54.672 16:36:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:54.672 16:36:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:54.672 16:36:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:54.672 16:36:31 -- bdev/bdev_raid.sh@660 -- # break 00:20:54.672 16:36:31 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:54.672 16:36:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:54.672 16:36:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:54.672 16:36:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:54.672 16:36:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:54.672 16:36:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.672 16:36:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:54.930 "name": "raid_bdev1", 00:20:54.930 "uuid": "ce676e2a-ab2e-4a19-9eb8-cf9d5f1821ba", 00:20:54.930 "strip_size_kb": 0, 00:20:54.930 "state": "online", 00:20:54.930 "raid_level": "raid1", 00:20:54.930 "superblock": true, 00:20:54.930 "num_base_bdevs": 2, 00:20:54.930 "num_base_bdevs_discovered": 2, 00:20:54.930 "num_base_bdevs_operational": 2, 00:20:54.930 "base_bdevs_list": [ 00:20:54.930 { 00:20:54.930 "name": "spare", 00:20:54.930 "uuid": "ead885a6-d5fe-55e8-b666-e7b2f5578b6f", 00:20:54.930 "is_configured": true, 00:20:54.930 "data_offset": 2048, 00:20:54.930 "data_size": 63488 00:20:54.930 }, 00:20:54.930 { 00:20:54.930 "name": "BaseBdev2", 00:20:54.930 "uuid": "b9061aca-70df-5588-9772-143b9b803cca", 00:20:54.930 "is_configured": true, 00:20:54.930 
"data_offset": 2048, 00:20:54.930 "data_size": 63488 00:20:54.930 } 00:20:54.930 ] 00:20:54.930 }' 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.930 16:36:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.188 16:36:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:55.189 "name": "raid_bdev1", 00:20:55.189 "uuid": "ce676e2a-ab2e-4a19-9eb8-cf9d5f1821ba", 00:20:55.189 "strip_size_kb": 0, 00:20:55.189 "state": "online", 00:20:55.189 "raid_level": "raid1", 00:20:55.189 "superblock": true, 00:20:55.189 "num_base_bdevs": 2, 00:20:55.189 "num_base_bdevs_discovered": 2, 00:20:55.189 "num_base_bdevs_operational": 2, 00:20:55.189 "base_bdevs_list": [ 00:20:55.189 { 00:20:55.189 "name": "spare", 00:20:55.189 "uuid": "ead885a6-d5fe-55e8-b666-e7b2f5578b6f", 00:20:55.189 "is_configured": true, 00:20:55.189 "data_offset": 2048, 00:20:55.189 "data_size": 63488 00:20:55.189 }, 00:20:55.189 { 00:20:55.189 "name": "BaseBdev2", 00:20:55.189 "uuid": "b9061aca-70df-5588-9772-143b9b803cca", 00:20:55.189 "is_configured": true, 00:20:55.189 "data_offset": 2048, 00:20:55.189 "data_size": 63488 00:20:55.189 } 00:20:55.189 ] 00:20:55.189 }' 00:20:55.189 16:36:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:55.189 16:36:31 -- common/autotest_common.sh@10 -- # set +x 00:20:55.755 16:36:32 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:56.013 [2024-07-11 16:36:32.765270] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:56.013 [2024-07-11 16:36:32.765304] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:56.013 [2024-07-11 16:36:32.765388] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.013 [2024-07-11 16:36:32.765456] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.013 [2024-07-11 16:36:32.765467] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:20:56.013 16:36:32 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.013 16:36:32 -- bdev/bdev_raid.sh@671 -- # jq 
length 00:20:56.271 16:36:32 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:56.271 16:36:32 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:56.271 16:36:32 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:56.271 16:36:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:56.271 16:36:32 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:56.271 16:36:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:56.271 16:36:32 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:56.271 16:36:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:56.271 16:36:32 -- bdev/nbd_common.sh@12 -- # local i 00:20:56.271 16:36:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:56.271 16:36:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:56.271 16:36:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:56.528 /dev/nbd0 00:20:56.528 16:36:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:56.528 16:36:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:56.528 16:36:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:20:56.528 16:36:33 -- common/autotest_common.sh@857 -- # local i 00:20:56.528 16:36:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:56.528 16:36:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:56.528 16:36:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:20:56.528 16:36:33 -- common/autotest_common.sh@861 -- # break 00:20:56.528 16:36:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:56.528 16:36:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:56.528 16:36:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:56.528 1+0 records in 00:20:56.528 1+0 records out 00:20:56.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494377 s, 8.3 MB/s 00:20:56.528 16:36:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.528 16:36:33 -- common/autotest_common.sh@874 -- # size=4096 00:20:56.528 16:36:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.528 16:36:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:56.528 16:36:33 -- common/autotest_common.sh@877 -- # return 0 00:20:56.528 16:36:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:56.528 16:36:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:56.528 16:36:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:56.785 /dev/nbd1 00:20:56.785 16:36:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:56.785 16:36:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:56.785 16:36:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:20:56.785 16:36:33 -- common/autotest_common.sh@857 -- # local i 00:20:56.785 16:36:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:20:56.785 16:36:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:20:56.785 16:36:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:20:56.785 16:36:33 -- common/autotest_common.sh@861 -- # break 00:20:56.785 16:36:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:20:56.785 16:36:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:20:56.785 16:36:33 -- 
common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:56.785 1+0 records in 00:20:56.785 1+0 records out 00:20:56.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044249 s, 9.3 MB/s 00:20:56.785 16:36:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.785 16:36:33 -- common/autotest_common.sh@874 -- # size=4096 00:20:56.785 16:36:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:56.785 16:36:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:20:56.785 16:36:33 -- common/autotest_common.sh@877 -- # return 0 00:20:56.785 16:36:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:56.785 16:36:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:56.785 16:36:33 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:57.042 16:36:33 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:57.042 16:36:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:57.042 16:36:33 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:57.042 16:36:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:57.042 16:36:33 -- bdev/nbd_common.sh@51 -- # local i 00:20:57.042 16:36:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:57.042 16:36:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@41 -- # break 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@45 -- # return 0 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:57.300 16:36:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:57.561 16:36:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:57.561 16:36:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:57.561 16:36:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:57.561 16:36:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:57.561 16:36:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:57.561 16:36:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:57.561 16:36:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:57.561 16:36:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:57.561 16:36:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:57.561 16:36:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:57.561 16:36:34 -- bdev/nbd_common.sh@41 -- # break 00:20:57.561 16:36:34 -- bdev/nbd_common.sh@45 -- # return 0 00:20:57.561 16:36:34 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:57.561 16:36:34 -- bdev/bdev_raid.sh@694 -- # for bdev 
in "${base_bdevs[@]}" 00:20:57.561 16:36:34 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:57.561 16:36:34 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:57.819 16:36:34 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:58.077 [2024-07-11 16:36:34.714883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:58.077 [2024-07-11 16:36:34.714953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.077 [2024-07-11 16:36:34.714985] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:58.077 [2024-07-11 16:36:34.715007] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.077 [2024-07-11 16:36:34.717030] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.077 [2024-07-11 16:36:34.717096] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:58.077 [2024-07-11 16:36:34.717191] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:58.077 [2024-07-11 16:36:34.717249] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:58.077 BaseBdev1 00:20:58.077 16:36:34 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:58.077 16:36:34 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:58.077 16:36:34 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:58.336 16:36:34 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:58.336 [2024-07-11 16:36:35.130970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:58.336 [2024-07-11 16:36:35.131036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.336 [2024-07-11 16:36:35.131064] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:58.336 [2024-07-11 16:36:35.131087] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.336 [2024-07-11 16:36:35.131448] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.336 [2024-07-11 16:36:35.131500] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:58.336 [2024-07-11 16:36:35.131638] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:58.336 [2024-07-11 16:36:35.131653] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:58.336 [2024-07-11 16:36:35.131660] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:58.336 [2024-07-11 16:36:35.131684] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:20:58.336 [2024-07-11 16:36:35.131750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:58.336 BaseBdev2 00:20:58.336 16:36:35 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete spare 00:20:58.594 16:36:35 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:58.853 [2024-07-11 16:36:35.487018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:58.853 [2024-07-11 16:36:35.487068] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:58.853 [2024-07-11 16:36:35.487097] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:58.853 [2024-07-11 16:36:35.487116] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:58.853 [2024-07-11 16:36:35.487472] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:58.853 [2024-07-11 16:36:35.487564] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:58.853 [2024-07-11 16:36:35.487669] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:58.853 [2024-07-11 16:36:35.487703] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:58.853 spare 00:20:58.853 16:36:35 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:58.853 16:36:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:58.853 16:36:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:58.853 16:36:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:58.853 16:36:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:58.853 16:36:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:58.853 16:36:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:58.853 16:36:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:58.853 16:36:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:58.853 16:36:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:58.853 16:36:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.853 16:36:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.853 [2024-07-11 16:36:35.587799] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:20:58.853 [2024-07-11 16:36:35.587823] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:58.853 [2024-07-11 16:36:35.587942] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5b10 00:20:58.853 [2024-07-11 16:36:35.588333] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:20:58.853 [2024-07-11 16:36:35.588358] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:20:58.853 [2024-07-11 16:36:35.588525] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:59.112 16:36:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.112 "name": "raid_bdev1", 00:20:59.112 "uuid": "ce676e2a-ab2e-4a19-9eb8-cf9d5f1821ba", 00:20:59.112 "strip_size_kb": 0, 00:20:59.112 "state": "online", 00:20:59.112 "raid_level": "raid1", 00:20:59.112 "superblock": true, 00:20:59.112 "num_base_bdevs": 2, 00:20:59.112 "num_base_bdevs_discovered": 2, 00:20:59.112 "num_base_bdevs_operational": 2, 00:20:59.112 "base_bdevs_list": [ 00:20:59.112 { 00:20:59.112 "name": "spare", 00:20:59.112 "uuid": 
"ead885a6-d5fe-55e8-b666-e7b2f5578b6f", 00:20:59.112 "is_configured": true, 00:20:59.112 "data_offset": 2048, 00:20:59.112 "data_size": 63488 00:20:59.112 }, 00:20:59.112 { 00:20:59.112 "name": "BaseBdev2", 00:20:59.112 "uuid": "b9061aca-70df-5588-9772-143b9b803cca", 00:20:59.112 "is_configured": true, 00:20:59.112 "data_offset": 2048, 00:20:59.112 "data_size": 63488 00:20:59.112 } 00:20:59.112 ] 00:20:59.112 }' 00:20:59.112 16:36:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.112 16:36:35 -- common/autotest_common.sh@10 -- # set +x 00:20:59.679 16:36:36 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:59.679 16:36:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:59.679 16:36:36 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:59.679 16:36:36 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:59.679 16:36:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:59.679 16:36:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.679 16:36:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.938 16:36:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:59.938 "name": "raid_bdev1", 00:20:59.938 "uuid": "ce676e2a-ab2e-4a19-9eb8-cf9d5f1821ba", 00:20:59.938 "strip_size_kb": 0, 00:20:59.938 "state": "online", 00:20:59.938 "raid_level": "raid1", 00:20:59.938 "superblock": true, 00:20:59.938 "num_base_bdevs": 2, 00:20:59.938 "num_base_bdevs_discovered": 2, 00:20:59.938 "num_base_bdevs_operational": 2, 00:20:59.938 "base_bdevs_list": [ 00:20:59.938 { 00:20:59.938 "name": "spare", 00:20:59.938 "uuid": "ead885a6-d5fe-55e8-b666-e7b2f5578b6f", 00:20:59.938 "is_configured": true, 00:20:59.938 "data_offset": 2048, 00:20:59.938 "data_size": 63488 00:20:59.938 }, 00:20:59.938 { 00:20:59.938 "name": "BaseBdev2", 00:20:59.938 "uuid": "b9061aca-70df-5588-9772-143b9b803cca", 00:20:59.938 "is_configured": true, 00:20:59.938 "data_offset": 2048, 00:20:59.938 "data_size": 63488 00:20:59.938 } 00:20:59.938 ] 00:20:59.938 }' 00:20:59.938 16:36:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:59.938 16:36:36 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:59.938 16:36:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:59.938 16:36:36 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:59.938 16:36:36 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.938 16:36:36 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:00.198 16:36:36 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.198 16:36:36 -- bdev/bdev_raid.sh@709 -- # killprocess 126082 00:21:00.198 16:36:36 -- common/autotest_common.sh@926 -- # '[' -z 126082 ']' 00:21:00.198 16:36:36 -- common/autotest_common.sh@930 -- # kill -0 126082 00:21:00.198 16:36:36 -- common/autotest_common.sh@931 -- # uname 00:21:00.198 16:36:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:00.198 16:36:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126082 00:21:00.198 16:36:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:00.198 killing process with pid 126082 00:21:00.198 Received shutdown signal, test time was about 60.000000 seconds 00:21:00.198 00:21:00.198 Latency(us) 00:21:00.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:00.198 =================================================================================================================== 00:21:00.198 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:00.198 16:36:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:00.198 16:36:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126082' 00:21:00.198 16:36:36 -- common/autotest_common.sh@945 -- # kill 126082 00:21:00.198 16:36:36 -- common/autotest_common.sh@950 -- # wait 126082 00:21:00.198 [2024-07-11 16:36:36.813337] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:00.198 [2024-07-11 16:36:36.813455] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.198 [2024-07-11 16:36:36.813529] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:00.198 [2024-07-11 16:36:36.813550] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:21:00.198 [2024-07-11 16:36:37.005011] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:01.134 16:36:37 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:01.134 00:21:01.134 real 0m24.398s 00:21:01.134 user 0m35.272s 00:21:01.134 sys 0m3.677s 00:21:01.134 16:36:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.134 16:36:37 -- common/autotest_common.sh@10 -- # set +x 00:21:01.134 ************************************ 00:21:01.134 END TEST raid_rebuild_test_sb 00:21:01.134 ************************************ 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:21:01.393 16:36:37 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:01.393 16:36:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:01.393 16:36:37 -- common/autotest_common.sh@10 -- # set +x 00:21:01.393 ************************************ 00:21:01.393 START TEST raid_rebuild_test_io 00:21:01.393 ************************************ 00:21:01.393 16:36:37 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 
00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@544 -- # raid_pid=126741 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@545 -- # waitforlisten 126741 /var/tmp/spdk-raid.sock 00:21:01.393 16:36:37 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:01.393 16:36:37 -- common/autotest_common.sh@819 -- # '[' -z 126741 ']' 00:21:01.393 16:36:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:01.393 16:36:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:01.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:01.393 16:36:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:01.393 16:36:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:01.393 16:36:37 -- common/autotest_common.sh@10 -- # set +x 00:21:01.393 [2024-07-11 16:36:38.038232] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:01.393 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:01.393 Zero copy mechanism will not be used. 00:21:01.393 [2024-07-11 16:36:38.038401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126741 ] 00:21:01.393 [2024-07-11 16:36:38.194988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.652 [2024-07-11 16:36:38.352075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.911 [2024-07-11 16:36:38.518751] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:02.171 16:36:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:02.171 16:36:38 -- common/autotest_common.sh@852 -- # return 0 00:21:02.171 16:36:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:02.171 16:36:38 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:02.171 16:36:38 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:02.430 BaseBdev1 00:21:02.430 16:36:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:02.430 16:36:39 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:02.430 16:36:39 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:02.689 BaseBdev2 00:21:02.689 16:36:39 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:02.947 spare_malloc 00:21:02.947 16:36:39 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:03.206 spare_delay 00:21:03.206 16:36:39 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b spare_delay -p spare 00:21:03.206 [2024-07-11 16:36:39.949568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:03.206 [2024-07-11 16:36:39.949666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.206 [2024-07-11 16:36:39.949699] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:03.206 [2024-07-11 16:36:39.949738] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.206 [2024-07-11 16:36:39.951694] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.206 [2024-07-11 16:36:39.951739] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:03.206 spare 00:21:03.206 16:36:39 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:03.464 [2024-07-11 16:36:40.177647] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:03.464 [2024-07-11 16:36:40.179226] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:03.464 [2024-07-11 16:36:40.179305] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:21:03.465 [2024-07-11 16:36:40.179317] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:03.465 [2024-07-11 16:36:40.179434] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:21:03.465 [2024-07-11 16:36:40.179774] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:21:03.465 [2024-07-11 16:36:40.179798] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:21:03.465 [2024-07-11 16:36:40.179953] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.465 16:36:40 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:03.465 16:36:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:03.465 16:36:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:03.465 16:36:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:03.465 16:36:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:03.465 16:36:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:03.465 16:36:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:03.465 16:36:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:03.465 16:36:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:03.465 16:36:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:03.465 16:36:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.465 16:36:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.723 16:36:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:03.723 "name": "raid_bdev1", 00:21:03.723 "uuid": "d0b78a5d-30f0-4d33-b6e0-76811eae4d93", 00:21:03.723 "strip_size_kb": 0, 00:21:03.723 "state": "online", 00:21:03.723 "raid_level": "raid1", 00:21:03.723 "superblock": false, 00:21:03.723 "num_base_bdevs": 2, 00:21:03.723 "num_base_bdevs_discovered": 2, 00:21:03.723 "num_base_bdevs_operational": 2, 00:21:03.723 "base_bdevs_list": [ 00:21:03.723 { 00:21:03.723 "name": "BaseBdev1", 
00:21:03.723 "uuid": "e11255d3-d7ee-4daf-9313-62aacbc943ef", 00:21:03.723 "is_configured": true, 00:21:03.723 "data_offset": 0, 00:21:03.723 "data_size": 65536 00:21:03.723 }, 00:21:03.723 { 00:21:03.723 "name": "BaseBdev2", 00:21:03.723 "uuid": "6d69a831-71eb-4edf-9752-6f54678cbba2", 00:21:03.723 "is_configured": true, 00:21:03.723 "data_offset": 0, 00:21:03.723 "data_size": 65536 00:21:03.723 } 00:21:03.723 ] 00:21:03.723 }' 00:21:03.723 16:36:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:03.723 16:36:40 -- common/autotest_common.sh@10 -- # set +x 00:21:04.289 16:36:40 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:04.290 16:36:40 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:04.548 [2024-07-11 16:36:41.190009] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:04.548 16:36:41 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:04.548 16:36:41 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.548 16:36:41 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:04.806 [2024-07-11 16:36:41.476167] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:21:04.806 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:04.806 Zero copy mechanism will not be used. 00:21:04.806 Running I/O for 60 seconds... 
00:21:04.806 [2024-07-11 16:36:41.556366] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:04.806 [2024-07-11 16:36:41.568107] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.806 16:36:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.074 16:36:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:05.074 "name": "raid_bdev1", 00:21:05.074 "uuid": "d0b78a5d-30f0-4d33-b6e0-76811eae4d93", 00:21:05.074 "strip_size_kb": 0, 00:21:05.074 "state": "online", 00:21:05.074 "raid_level": "raid1", 00:21:05.074 "superblock": false, 00:21:05.074 "num_base_bdevs": 2, 00:21:05.074 "num_base_bdevs_discovered": 1, 00:21:05.074 "num_base_bdevs_operational": 1, 00:21:05.074 "base_bdevs_list": [ 00:21:05.074 { 00:21:05.074 "name": null, 00:21:05.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.074 "is_configured": false, 00:21:05.074 "data_offset": 0, 00:21:05.074 "data_size": 65536 00:21:05.074 }, 00:21:05.074 { 00:21:05.074 "name": "BaseBdev2", 00:21:05.074 "uuid": "6d69a831-71eb-4edf-9752-6f54678cbba2", 00:21:05.074 "is_configured": true, 00:21:05.074 "data_offset": 0, 00:21:05.074 "data_size": 65536 00:21:05.074 } 00:21:05.074 ] 00:21:05.074 }' 00:21:05.074 16:36:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:05.074 16:36:41 -- common/autotest_common.sh@10 -- # set +x 00:21:05.664 16:36:42 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:05.922 [2024-07-11 16:36:42.626564] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:05.922 [2024-07-11 16:36:42.626618] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.922 16:36:42 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:05.922 [2024-07-11 16:36:42.678096] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:05.922 [2024-07-11 16:36:42.679786] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:06.180 [2024-07-11 16:36:42.794051] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:06.180 [2024-07-11 16:36:42.794438] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:06.438 [2024-07-11 16:36:43.021464] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
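After the hot-removal above, the test asserts that the array keeps serving I/O degraded rather than going offline: verify_raid_bdev_state raid_bdev1 online raid1 0 1 boils down to field-by-field checks on the same bdev_raid_get_bdevs dump. A hedged one-liner equivalent of those checks — jq -e exits nonzero when the expression is false, so it can stand in for the helper's separate comparisons, though the real helper also validates strip_size_kb and the bdev counts individually:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
    jq -e '.[] | select(.name == "raid_bdev1")
           | .state == "online" and .raid_level == "raid1"
             and .num_base_bdevs_discovered == 1
             and .num_base_bdevs_operational == 1
             and .base_bdevs_list[0].name == null'

The expected null placeholder matches the "name": null, all-zero-uuid entry left in base_bdevs_list above once BaseBdev1 has been removed.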
00:21:06.438 [2024-07-11 16:36:43.021656] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:06.696 [2024-07-11 16:36:43.394817] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:06.696 [2024-07-11 16:36:43.395051] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:06.954 [2024-07-11 16:36:43.622312] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:06.954 [2024-07-11 16:36:43.622636] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:06.954 16:36:43 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.954 16:36:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:06.954 16:36:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:06.954 16:36:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:06.954 16:36:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:06.954 16:36:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.954 16:36:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.212 [2024-07-11 16:36:43.838781] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:07.212 16:36:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:07.212 "name": "raid_bdev1", 00:21:07.212 "uuid": "d0b78a5d-30f0-4d33-b6e0-76811eae4d93", 00:21:07.212 "strip_size_kb": 0, 00:21:07.212 "state": "online", 00:21:07.212 "raid_level": "raid1", 00:21:07.212 "superblock": false, 00:21:07.212 "num_base_bdevs": 2, 00:21:07.212 "num_base_bdevs_discovered": 2, 00:21:07.212 "num_base_bdevs_operational": 2, 00:21:07.212 "process": { 00:21:07.212 "type": "rebuild", 00:21:07.212 "target": "spare", 00:21:07.212 "progress": { 00:21:07.212 "blocks": 16384, 00:21:07.212 "percent": 25 00:21:07.212 } 00:21:07.212 }, 00:21:07.212 "base_bdevs_list": [ 00:21:07.212 { 00:21:07.212 "name": "spare", 00:21:07.212 "uuid": "57812d5c-fd75-5d50-bf14-f1e92fd1ab9c", 00:21:07.212 "is_configured": true, 00:21:07.212 "data_offset": 0, 00:21:07.212 "data_size": 65536 00:21:07.212 }, 00:21:07.212 { 00:21:07.212 "name": "BaseBdev2", 00:21:07.212 "uuid": "6d69a831-71eb-4edf-9752-6f54678cbba2", 00:21:07.212 "is_configured": true, 00:21:07.212 "data_offset": 0, 00:21:07.212 "data_size": 65536 00:21:07.212 } 00:21:07.212 ] 00:21:07.212 }' 00:21:07.212 16:36:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:07.212 16:36:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.212 16:36:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:07.212 16:36:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.212 16:36:44 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:07.470 [2024-07-11 16:36:44.086821] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:07.470 [2024-07-11 16:36:44.190608] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.470 [2024-07-11 
16:36:44.203378] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:07.470 [2024-07-11 16:36:44.210546] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:07.470 [2024-07-11 16:36:44.212489] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.470 [2024-07-11 16:36:44.250563] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:21:07.470 16:36:44 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:07.470 16:36:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:07.470 16:36:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:07.470 16:36:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:07.470 16:36:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:07.470 16:36:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:07.470 16:36:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:07.470 16:36:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:07.470 16:36:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:07.470 16:36:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:07.470 16:36:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.470 16:36:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.728 16:36:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:07.728 "name": "raid_bdev1", 00:21:07.728 "uuid": "d0b78a5d-30f0-4d33-b6e0-76811eae4d93", 00:21:07.728 "strip_size_kb": 0, 00:21:07.728 "state": "online", 00:21:07.728 "raid_level": "raid1", 00:21:07.728 "superblock": false, 00:21:07.728 "num_base_bdevs": 2, 00:21:07.728 "num_base_bdevs_discovered": 1, 00:21:07.728 "num_base_bdevs_operational": 1, 00:21:07.728 "base_bdevs_list": [ 00:21:07.728 { 00:21:07.728 "name": null, 00:21:07.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.728 "is_configured": false, 00:21:07.728 "data_offset": 0, 00:21:07.728 "data_size": 65536 00:21:07.728 }, 00:21:07.728 { 00:21:07.728 "name": "BaseBdev2", 00:21:07.728 "uuid": "6d69a831-71eb-4edf-9752-6f54678cbba2", 00:21:07.728 "is_configured": true, 00:21:07.728 "data_offset": 0, 00:21:07.728 "data_size": 65536 00:21:07.728 } 00:21:07.728 ] 00:21:07.728 }' 00:21:07.728 16:36:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:07.728 16:36:44 -- common/autotest_common.sh@10 -- # set +x 00:21:08.662 16:36:45 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:08.662 16:36:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:08.662 16:36:45 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:08.662 16:36:45 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:08.662 16:36:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:08.662 16:36:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.662 16:36:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.662 16:36:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:08.662 "name": "raid_bdev1", 00:21:08.662 "uuid": "d0b78a5d-30f0-4d33-b6e0-76811eae4d93", 00:21:08.662 "strip_size_kb": 0, 00:21:08.662 "state": "online", 00:21:08.662 "raid_level": 
"raid1", 00:21:08.662 "superblock": false, 00:21:08.662 "num_base_bdevs": 2, 00:21:08.662 "num_base_bdevs_discovered": 1, 00:21:08.662 "num_base_bdevs_operational": 1, 00:21:08.662 "base_bdevs_list": [ 00:21:08.662 { 00:21:08.662 "name": null, 00:21:08.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.662 "is_configured": false, 00:21:08.662 "data_offset": 0, 00:21:08.662 "data_size": 65536 00:21:08.662 }, 00:21:08.662 { 00:21:08.662 "name": "BaseBdev2", 00:21:08.662 "uuid": "6d69a831-71eb-4edf-9752-6f54678cbba2", 00:21:08.662 "is_configured": true, 00:21:08.662 "data_offset": 0, 00:21:08.662 "data_size": 65536 00:21:08.662 } 00:21:08.662 ] 00:21:08.662 }' 00:21:08.662 16:36:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:08.920 16:36:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:08.920 16:36:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:08.920 16:36:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:08.920 16:36:45 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:08.920 [2024-07-11 16:36:45.714775] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:08.920 [2024-07-11 16:36:45.714829] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:09.177 16:36:45 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:09.177 [2024-07-11 16:36:45.759453] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:09.177 [2024-07-11 16:36:45.761199] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:09.177 [2024-07-11 16:36:45.868799] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:09.177 [2024-07-11 16:36:45.869310] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:09.435 [2024-07-11 16:36:45.995092] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:09.435 [2024-07-11 16:36:45.995231] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:09.435 [2024-07-11 16:36:46.242328] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:09.693 [2024-07-11 16:36:46.348731] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:09.952 [2024-07-11 16:36:46.581996] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:09.952 16:36:46 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.952 16:36:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:09.952 16:36:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:09.952 16:36:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:09.952 16:36:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:09.952 16:36:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.952 16:36:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.210 [2024-07-11 16:36:46.823382] 
bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:10.210 16:36:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:10.210 "name": "raid_bdev1", 00:21:10.210 "uuid": "d0b78a5d-30f0-4d33-b6e0-76811eae4d93", 00:21:10.210 "strip_size_kb": 0, 00:21:10.210 "state": "online", 00:21:10.210 "raid_level": "raid1", 00:21:10.210 "superblock": false, 00:21:10.210 "num_base_bdevs": 2, 00:21:10.210 "num_base_bdevs_discovered": 2, 00:21:10.210 "num_base_bdevs_operational": 2, 00:21:10.210 "process": { 00:21:10.210 "type": "rebuild", 00:21:10.210 "target": "spare", 00:21:10.210 "progress": { 00:21:10.210 "blocks": 18432, 00:21:10.210 "percent": 28 00:21:10.210 } 00:21:10.210 }, 00:21:10.210 "base_bdevs_list": [ 00:21:10.210 { 00:21:10.210 "name": "spare", 00:21:10.210 "uuid": "57812d5c-fd75-5d50-bf14-f1e92fd1ab9c", 00:21:10.210 "is_configured": true, 00:21:10.210 "data_offset": 0, 00:21:10.210 "data_size": 65536 00:21:10.210 }, 00:21:10.210 { 00:21:10.210 "name": "BaseBdev2", 00:21:10.210 "uuid": "6d69a831-71eb-4edf-9752-6f54678cbba2", 00:21:10.210 "is_configured": true, 00:21:10.211 "data_offset": 0, 00:21:10.211 "data_size": 65536 00:21:10.211 } 00:21:10.211 ] 00:21:10.211 }' 00:21:10.211 16:36:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:10.470 [2024-07-11 16:36:47.044018] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@657 -- # local timeout=423 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.470 16:36:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.470 [2024-07-11 16:36:47.259383] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:10.470 [2024-07-11 16:36:47.259655] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:10.729 16:36:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:10.729 "name": "raid_bdev1", 00:21:10.729 "uuid": "d0b78a5d-30f0-4d33-b6e0-76811eae4d93", 00:21:10.729 "strip_size_kb": 0, 00:21:10.729 "state": "online", 00:21:10.729 "raid_level": "raid1", 00:21:10.729 "superblock": false, 00:21:10.729 "num_base_bdevs": 2, 00:21:10.729 
"num_base_bdevs_discovered": 2, 00:21:10.729 "num_base_bdevs_operational": 2, 00:21:10.729 "process": { 00:21:10.729 "type": "rebuild", 00:21:10.729 "target": "spare", 00:21:10.729 "progress": { 00:21:10.729 "blocks": 22528, 00:21:10.729 "percent": 34 00:21:10.729 } 00:21:10.729 }, 00:21:10.729 "base_bdevs_list": [ 00:21:10.729 { 00:21:10.729 "name": "spare", 00:21:10.729 "uuid": "57812d5c-fd75-5d50-bf14-f1e92fd1ab9c", 00:21:10.729 "is_configured": true, 00:21:10.729 "data_offset": 0, 00:21:10.729 "data_size": 65536 00:21:10.729 }, 00:21:10.729 { 00:21:10.729 "name": "BaseBdev2", 00:21:10.729 "uuid": "6d69a831-71eb-4edf-9752-6f54678cbba2", 00:21:10.729 "is_configured": true, 00:21:10.729 "data_offset": 0, 00:21:10.729 "data_size": 65536 00:21:10.729 } 00:21:10.729 ] 00:21:10.729 }' 00:21:10.729 16:36:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:10.729 16:36:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:10.729 16:36:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:10.729 16:36:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:10.729 16:36:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:10.988 [2024-07-11 16:36:47.722651] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:11.556 [2024-07-11 16:36:48.294146] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:11.815 [2024-07-11 16:36:48.407918] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:11.815 16:36:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:11.815 16:36:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.815 16:36:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:11.815 16:36:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:11.815 16:36:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:11.815 16:36:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:11.815 16:36:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.815 16:36:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.074 [2024-07-11 16:36:48.641833] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:21:12.074 16:36:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:12.074 "name": "raid_bdev1", 00:21:12.074 "uuid": "d0b78a5d-30f0-4d33-b6e0-76811eae4d93", 00:21:12.074 "strip_size_kb": 0, 00:21:12.074 "state": "online", 00:21:12.074 "raid_level": "raid1", 00:21:12.074 "superblock": false, 00:21:12.074 "num_base_bdevs": 2, 00:21:12.074 "num_base_bdevs_discovered": 2, 00:21:12.074 "num_base_bdevs_operational": 2, 00:21:12.074 "process": { 00:21:12.074 "type": "rebuild", 00:21:12.074 "target": "spare", 00:21:12.074 "progress": { 00:21:12.074 "blocks": 45056, 00:21:12.074 "percent": 68 00:21:12.074 } 00:21:12.074 }, 00:21:12.074 "base_bdevs_list": [ 00:21:12.074 { 00:21:12.074 "name": "spare", 00:21:12.074 "uuid": "57812d5c-fd75-5d50-bf14-f1e92fd1ab9c", 00:21:12.074 "is_configured": true, 00:21:12.074 "data_offset": 0, 00:21:12.074 "data_size": 65536 00:21:12.074 }, 00:21:12.074 { 00:21:12.074 "name": "BaseBdev2", 00:21:12.074 "uuid": 
"6d69a831-71eb-4edf-9752-6f54678cbba2", 00:21:12.074 "is_configured": true, 00:21:12.074 "data_offset": 0, 00:21:12.074 "data_size": 65536 00:21:12.074 } 00:21:12.074 ] 00:21:12.074 }' 00:21:12.074 16:36:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:12.074 16:36:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:12.074 16:36:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:12.074 16:36:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:12.074 16:36:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:12.642 [2024-07-11 16:36:49.178396] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:12.642 [2024-07-11 16:36:49.178637] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:13.209 16:36:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:13.209 16:36:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.209 16:36:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:13.209 16:36:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:13.209 16:36:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:13.209 16:36:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:13.209 16:36:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.209 16:36:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.209 [2024-07-11 16:36:49.852669] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:13.209 [2024-07-11 16:36:49.952709] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:13.209 [2024-07-11 16:36:49.954496] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.209 16:36:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:13.209 "name": "raid_bdev1", 00:21:13.209 "uuid": "d0b78a5d-30f0-4d33-b6e0-76811eae4d93", 00:21:13.209 "strip_size_kb": 0, 00:21:13.209 "state": "online", 00:21:13.209 "raid_level": "raid1", 00:21:13.209 "superblock": false, 00:21:13.209 "num_base_bdevs": 2, 00:21:13.209 "num_base_bdevs_discovered": 2, 00:21:13.209 "num_base_bdevs_operational": 2, 00:21:13.209 "base_bdevs_list": [ 00:21:13.209 { 00:21:13.209 "name": "spare", 00:21:13.209 "uuid": "57812d5c-fd75-5d50-bf14-f1e92fd1ab9c", 00:21:13.209 "is_configured": true, 00:21:13.209 "data_offset": 0, 00:21:13.209 "data_size": 65536 00:21:13.210 }, 00:21:13.210 { 00:21:13.210 "name": "BaseBdev2", 00:21:13.210 "uuid": "6d69a831-71eb-4edf-9752-6f54678cbba2", 00:21:13.210 "is_configured": true, 00:21:13.210 "data_offset": 0, 00:21:13.210 "data_size": 65536 00:21:13.210 } 00:21:13.210 ] 00:21:13.210 }' 00:21:13.210 16:36:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:13.468 16:36:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:13.468 16:36:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:13.468 16:36:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:13.468 16:36:50 -- bdev/bdev_raid.sh@660 -- # break 00:21:13.468 16:36:50 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:13.468 16:36:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:13.468 16:36:50 -- 
bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:13.468 16:36:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:13.468 16:36:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:13.468 16:36:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.468 16:36:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:13.727 "name": "raid_bdev1", 00:21:13.727 "uuid": "d0b78a5d-30f0-4d33-b6e0-76811eae4d93", 00:21:13.727 "strip_size_kb": 0, 00:21:13.727 "state": "online", 00:21:13.727 "raid_level": "raid1", 00:21:13.727 "superblock": false, 00:21:13.727 "num_base_bdevs": 2, 00:21:13.727 "num_base_bdevs_discovered": 2, 00:21:13.727 "num_base_bdevs_operational": 2, 00:21:13.727 "base_bdevs_list": [ 00:21:13.727 { 00:21:13.727 "name": "spare", 00:21:13.727 "uuid": "57812d5c-fd75-5d50-bf14-f1e92fd1ab9c", 00:21:13.727 "is_configured": true, 00:21:13.727 "data_offset": 0, 00:21:13.727 "data_size": 65536 00:21:13.727 }, 00:21:13.727 { 00:21:13.727 "name": "BaseBdev2", 00:21:13.727 "uuid": "6d69a831-71eb-4edf-9752-6f54678cbba2", 00:21:13.727 "is_configured": true, 00:21:13.727 "data_offset": 0, 00:21:13.727 "data_size": 65536 00:21:13.727 } 00:21:13.727 ] 00:21:13.727 }' 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.727 16:36:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.985 16:36:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:13.985 "name": "raid_bdev1", 00:21:13.985 "uuid": "d0b78a5d-30f0-4d33-b6e0-76811eae4d93", 00:21:13.985 "strip_size_kb": 0, 00:21:13.985 "state": "online", 00:21:13.985 "raid_level": "raid1", 00:21:13.985 "superblock": false, 00:21:13.985 "num_base_bdevs": 2, 00:21:13.985 "num_base_bdevs_discovered": 2, 00:21:13.985 "num_base_bdevs_operational": 2, 00:21:13.985 "base_bdevs_list": [ 00:21:13.985 { 00:21:13.985 "name": "spare", 00:21:13.985 "uuid": "57812d5c-fd75-5d50-bf14-f1e92fd1ab9c", 00:21:13.985 "is_configured": true, 00:21:13.985 "data_offset": 0, 00:21:13.985 "data_size": 65536 00:21:13.985 }, 00:21:13.985 { 00:21:13.985 "name": "BaseBdev2", 00:21:13.985 "uuid": "6d69a831-71eb-4edf-9752-6f54678cbba2", 00:21:13.985 "is_configured": true, 
00:21:13.985 "data_offset": 0, 00:21:13.985 "data_size": 65536 00:21:13.985 } 00:21:13.985 ] 00:21:13.985 }' 00:21:13.985 16:36:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:13.985 16:36:50 -- common/autotest_common.sh@10 -- # set +x 00:21:14.552 16:36:51 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:14.811 [2024-07-11 16:36:51.609239] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:14.811 [2024-07-11 16:36:51.609276] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:15.070 00:21:15.070 Latency(us) 00:21:15.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.070 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:15.070 raid_bdev1 : 10.20 123.44 370.33 0.00 0.00 10981.02 301.61 113436.86 00:21:15.070 =================================================================================================================== 00:21:15.070 Total : 123.44 370.33 0.00 0.00 10981.02 301.61 113436.86 00:21:15.070 [2024-07-11 16:36:51.691811] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:15.070 [2024-07-11 16:36:51.691874] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:15.070 [2024-07-11 16:36:51.691947] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:15.070 [2024-07-11 16:36:51.691960] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:21:15.070 0 00:21:15.070 16:36:51 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.070 16:36:51 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:15.329 16:36:51 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:15.329 16:36:51 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:15.329 16:36:51 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:15.329 16:36:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.329 16:36:51 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:15.330 16:36:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:15.330 16:36:51 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:15.330 16:36:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:15.330 16:36:51 -- bdev/nbd_common.sh@12 -- # local i 00:21:15.330 16:36:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:15.330 16:36:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.330 16:36:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:15.588 /dev/nbd0 00:21:15.588 16:36:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:15.588 16:36:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:15.588 16:36:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:15.588 16:36:52 -- common/autotest_common.sh@857 -- # local i 00:21:15.588 16:36:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:15.588 16:36:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:15.588 16:36:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:15.588 16:36:52 -- common/autotest_common.sh@861 -- # break 00:21:15.588 16:36:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 
00:21:15.588 16:36:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:15.588 16:36:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.588 1+0 records in 00:21:15.588 1+0 records out 00:21:15.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533194 s, 7.7 MB/s 00:21:15.588 16:36:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.588 16:36:52 -- common/autotest_common.sh@874 -- # size=4096 00:21:15.588 16:36:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.588 16:36:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:15.588 16:36:52 -- common/autotest_common.sh@877 -- # return 0 00:21:15.588 16:36:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.588 16:36:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.588 16:36:52 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:15.588 16:36:52 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:15.588 16:36:52 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:15.588 16:36:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.588 16:36:52 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:15.588 16:36:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:15.588 16:36:52 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:15.588 16:36:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:15.588 16:36:52 -- bdev/nbd_common.sh@12 -- # local i 00:21:15.588 16:36:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:15.588 16:36:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.588 16:36:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:15.588 /dev/nbd1 00:21:15.847 16:36:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:15.847 16:36:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:15.847 16:36:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:15.847 16:36:52 -- common/autotest_common.sh@857 -- # local i 00:21:15.847 16:36:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:15.847 16:36:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:15.847 16:36:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:15.847 16:36:52 -- common/autotest_common.sh@861 -- # break 00:21:15.847 16:36:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:15.847 16:36:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:15.847 16:36:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:15.847 1+0 records in 00:21:15.847 1+0 records out 00:21:15.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541339 s, 7.6 MB/s 00:21:15.847 16:36:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.847 16:36:52 -- common/autotest_common.sh@874 -- # size=4096 00:21:15.847 16:36:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:15.847 16:36:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:15.847 16:36:52 -- common/autotest_common.sh@877 -- # return 0 00:21:15.847 16:36:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:15.847 16:36:52 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:15.847 
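# ---- note on the cmp that follows ----
# This is the test's actual data check: after the rebuild completed, the
# spare (exported above as /dev/nbd0) and the remaining base bdev BaseBdev2
# (/dev/nbd1) must be byte-identical. Because this variant runs with
# superblock=false and data_offset 0, the comparison starts at byte 0 of
# both devices; a hedged standalone equivalent:
cmp -i 0 /dev/nbd0 /dev/nbd1 && echo 'rebuilt spare matches BaseBdev2'
# ---- end note; trace continues ----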
16:36:52 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:15.847 16:36:52 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:15.847 16:36:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:15.847 16:36:52 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:15.847 16:36:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:15.847 16:36:52 -- bdev/nbd_common.sh@51 -- # local i 00:21:15.847 16:36:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:15.847 16:36:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:16.105 16:36:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:16.105 16:36:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:16.105 16:36:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:16.105 16:36:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.105 16:36:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.105 16:36:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:16.105 16:36:52 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:16.105 16:36:52 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:16.105 16:36:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.106 16:36:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:16.106 16:36:52 -- bdev/nbd_common.sh@41 -- # break 00:21:16.106 16:36:52 -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.106 16:36:52 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:16.106 16:36:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:16.106 16:36:52 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:16.106 16:36:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:16.106 16:36:52 -- bdev/nbd_common.sh@51 -- # local i 00:21:16.106 16:36:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:16.106 16:36:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:16.365 16:36:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:16.365 16:36:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:16.365 16:36:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:16.365 16:36:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:16.365 16:36:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.365 16:36:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:16.365 16:36:53 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:16.365 16:36:53 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:16.365 16:36:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:16.365 16:36:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:16.623 16:36:53 -- bdev/nbd_common.sh@41 -- # break 00:21:16.623 16:36:53 -- bdev/nbd_common.sh@45 -- # return 0 00:21:16.623 16:36:53 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:16.623 16:36:53 -- bdev/bdev_raid.sh@709 -- # killprocess 126741 00:21:16.623 16:36:53 -- common/autotest_common.sh@926 -- # '[' -z 126741 ']' 00:21:16.623 16:36:53 -- common/autotest_common.sh@930 -- # kill -0 126741 00:21:16.623 16:36:53 -- common/autotest_common.sh@931 -- # uname 00:21:16.623 16:36:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:16.623 16:36:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 126741 00:21:16.623 killing process with pid 126741 00:21:16.623 Received shutdown 
signal, test time was about 11.711583 seconds 00:21:16.623 00:21:16.623 Latency(us) 00:21:16.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.623 =================================================================================================================== 00:21:16.623 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.623 16:36:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:16.623 16:36:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:16.623 16:36:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 126741' 00:21:16.623 16:36:53 -- common/autotest_common.sh@945 -- # kill 126741 00:21:16.623 16:36:53 -- common/autotest_common.sh@950 -- # wait 126741 00:21:16.623 [2024-07-11 16:36:53.189617] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:16.623 [2024-07-11 16:36:53.338724] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:17.612 ************************************ 00:21:17.612 END TEST raid_rebuild_test_io 00:21:17.612 ************************************ 00:21:17.612 16:36:54 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:17.612 00:21:17.612 real 0m16.329s 00:21:17.612 user 0m25.375s 00:21:17.612 sys 0m1.645s 00:21:17.612 16:36:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:17.612 16:36:54 -- common/autotest_common.sh@10 -- # set +x 00:21:17.612 16:36:54 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:21:17.612 16:36:54 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:17.612 16:36:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:17.612 16:36:54 -- common/autotest_common.sh@10 -- # set +x 00:21:17.612 ************************************ 00:21:17.612 START TEST raid_rebuild_test_sb_io 00:21:17.612 ************************************ 00:21:17.612 16:36:54 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:21:17.612 16:36:54 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:17.612 16:36:54 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:21:17.612 16:36:54 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:17.612 16:36:54 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:17.612 16:36:54 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@536 -- # 
strip_size=0 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@544 -- # raid_pid=127233 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@545 -- # waitforlisten 127233 /var/tmp/spdk-raid.sock 00:21:17.613 16:36:54 -- common/autotest_common.sh@819 -- # '[' -z 127233 ']' 00:21:17.613 16:36:54 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:17.613 16:36:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:17.613 16:36:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:17.613 16:36:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:17.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:17.613 16:36:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:17.613 16:36:54 -- common/autotest_common.sh@10 -- # set +x 00:21:17.613 [2024-07-11 16:36:54.415868] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:17.613 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:17.613 Zero copy mechanism will not be used. 00:21:17.613 [2024-07-11 16:36:54.416083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127233 ] 00:21:17.871 [2024-07-11 16:36:54.581948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.130 [2024-07-11 16:36:54.748825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.130 [2024-07-11 16:36:54.917397] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:18.696 16:36:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:18.696 16:36:55 -- common/autotest_common.sh@852 -- # return 0 00:21:18.696 16:36:55 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:18.696 16:36:55 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:18.696 16:36:55 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:18.954 BaseBdev1_malloc 00:21:18.954 16:36:55 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:18.954 [2024-07-11 16:36:55.698616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:18.954 [2024-07-11 16:36:55.698720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.954 [2024-07-11 16:36:55.698752] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:18.954 [2024-07-11 16:36:55.698793] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.954 [2024-07-11 16:36:55.700742] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.954 [2024-07-11 16:36:55.700804] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:18.954 BaseBdev1 00:21:18.954 16:36:55 -- bdev/bdev_raid.sh@548 -- # for bdev in 
"${base_bdevs[@]}" 00:21:18.954 16:36:55 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:18.954 16:36:55 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:19.212 BaseBdev2_malloc 00:21:19.212 16:36:56 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:19.471 [2024-07-11 16:36:56.187166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:19.471 [2024-07-11 16:36:56.187257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:19.471 [2024-07-11 16:36:56.187301] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:19.471 [2024-07-11 16:36:56.187346] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.471 [2024-07-11 16:36:56.189509] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.471 [2024-07-11 16:36:56.189558] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:19.471 BaseBdev2 00:21:19.471 16:36:56 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:19.730 spare_malloc 00:21:19.730 16:36:56 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:19.988 spare_delay 00:21:19.988 16:36:56 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:19.988 [2024-07-11 16:36:56.791559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:19.988 [2024-07-11 16:36:56.791641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:19.988 [2024-07-11 16:36:56.791678] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:19.988 [2024-07-11 16:36:56.791715] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.988 [2024-07-11 16:36:56.793740] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.988 [2024-07-11 16:36:56.793797] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:19.988 spare 00:21:20.247 16:36:56 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:21:20.247 [2024-07-11 16:36:56.975666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:20.247 [2024-07-11 16:36:56.977283] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:20.247 [2024-07-11 16:36:56.977462] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:21:20.247 [2024-07-11 16:36:56.977477] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:20.247 [2024-07-11 16:36:56.977633] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:20.247 [2024-07-11 16:36:56.977960] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:21:20.247 [2024-07-11 16:36:56.977992] 
bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:21:20.247 [2024-07-11 16:36:56.978124] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.247 16:36:56 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:20.247 16:36:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:20.247 16:36:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:20.247 16:36:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:20.247 16:36:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:20.247 16:36:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:20.247 16:36:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:20.247 16:36:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:20.247 16:36:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:20.247 16:36:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:20.247 16:36:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.247 16:36:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.506 16:36:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:20.506 "name": "raid_bdev1", 00:21:20.506 "uuid": "1f671bdb-e499-4889-bf5f-8fefcb7dd948", 00:21:20.506 "strip_size_kb": 0, 00:21:20.506 "state": "online", 00:21:20.506 "raid_level": "raid1", 00:21:20.506 "superblock": true, 00:21:20.506 "num_base_bdevs": 2, 00:21:20.506 "num_base_bdevs_discovered": 2, 00:21:20.506 "num_base_bdevs_operational": 2, 00:21:20.506 "base_bdevs_list": [ 00:21:20.506 { 00:21:20.506 "name": "BaseBdev1", 00:21:20.506 "uuid": "fe3d76f9-21b7-5642-8578-468b71df26fc", 00:21:20.506 "is_configured": true, 00:21:20.506 "data_offset": 2048, 00:21:20.506 "data_size": 63488 00:21:20.506 }, 00:21:20.506 { 00:21:20.506 "name": "BaseBdev2", 00:21:20.506 "uuid": "1751a4c3-7938-56df-a43d-9aa3a99a0285", 00:21:20.506 "is_configured": true, 00:21:20.506 "data_offset": 2048, 00:21:20.506 "data_size": 63488 00:21:20.506 } 00:21:20.506 ] 00:21:20.506 }' 00:21:20.506 16:36:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:20.506 16:36:57 -- common/autotest_common.sh@10 -- # set +x 00:21:21.073 16:36:57 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:21.073 16:36:57 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:21.332 [2024-07-11 16:36:57.979973] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:21.332 16:36:57 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:21.332 16:36:57 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.332 16:36:57 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:21.591 16:36:58 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:21.591 16:36:58 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:21.591 16:36:58 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:21.591 16:36:58 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:21.591 [2024-07-11 16:36:58.282621] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005930 00:21:21.591 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:21.591 Zero copy mechanism will not be used. 00:21:21.591 Running I/O for 60 seconds... 00:21:21.591 [2024-07-11 16:36:58.394181] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:21.850 [2024-07-11 16:36:58.400177] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:21:21.850 16:36:58 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:21.850 16:36:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:21.850 16:36:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:21.850 16:36:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:21.850 16:36:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:21.850 16:36:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:21.850 16:36:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:21.850 16:36:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:21.850 16:36:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:21.850 16:36:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:21.850 16:36:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.850 16:36:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.108 16:36:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:22.108 "name": "raid_bdev1", 00:21:22.108 "uuid": "1f671bdb-e499-4889-bf5f-8fefcb7dd948", 00:21:22.108 "strip_size_kb": 0, 00:21:22.108 "state": "online", 00:21:22.108 "raid_level": "raid1", 00:21:22.108 "superblock": true, 00:21:22.108 "num_base_bdevs": 2, 00:21:22.108 "num_base_bdevs_discovered": 1, 00:21:22.108 "num_base_bdevs_operational": 1, 00:21:22.108 "base_bdevs_list": [ 00:21:22.108 { 00:21:22.108 "name": null, 00:21:22.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.108 "is_configured": false, 00:21:22.108 "data_offset": 2048, 00:21:22.108 "data_size": 63488 00:21:22.108 }, 00:21:22.108 { 00:21:22.108 "name": "BaseBdev2", 00:21:22.108 "uuid": "1751a4c3-7938-56df-a43d-9aa3a99a0285", 00:21:22.108 "is_configured": true, 00:21:22.108 "data_offset": 2048, 00:21:22.108 "data_size": 63488 00:21:22.108 } 00:21:22.108 ] 00:21:22.108 }' 00:21:22.108 16:36:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:22.108 16:36:58 -- common/autotest_common.sh@10 -- # set +x 00:21:22.677 16:36:59 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:22.935 [2024-07-11 16:36:59.527653] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:22.935 [2024-07-11 16:36:59.527732] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:22.935 16:36:59 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:22.935 [2024-07-11 16:36:59.580459] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:22.935 [2024-07-11 16:36:59.582304] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:22.935 [2024-07-11 16:36:59.690651] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:22.935 [2024-07-11 16:36:59.691080] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:23.194 [2024-07-11 16:36:59.892632] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:23.194 [2024-07-11 16:36:59.892771] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:23.453 [2024-07-11 16:37:00.127528] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:23.453 [2024-07-11 16:37:00.127896] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:23.453 [2024-07-11 16:37:00.241663] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:24.020 16:37:00 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.020 16:37:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.020 16:37:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:24.020 16:37:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:24.020 16:37:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.020 16:37:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.020 16:37:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.020 [2024-07-11 16:37:00.592335] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:24.020 [2024-07-11 16:37:00.592748] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:24.020 [2024-07-11 16:37:00.801463] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:24.020 [2024-07-11 16:37:00.801751] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:24.279 16:37:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:24.279 "name": "raid_bdev1", 00:21:24.279 "uuid": "1f671bdb-e499-4889-bf5f-8fefcb7dd948", 00:21:24.279 "strip_size_kb": 0, 00:21:24.279 "state": "online", 00:21:24.279 "raid_level": "raid1", 00:21:24.279 "superblock": true, 00:21:24.279 "num_base_bdevs": 2, 00:21:24.279 "num_base_bdevs_discovered": 2, 00:21:24.279 "num_base_bdevs_operational": 2, 00:21:24.279 "process": { 00:21:24.279 "type": "rebuild", 00:21:24.279 "target": "spare", 00:21:24.279 "progress": { 00:21:24.279 "blocks": 16384, 00:21:24.279 "percent": 25 00:21:24.279 } 00:21:24.279 }, 00:21:24.279 "base_bdevs_list": [ 00:21:24.280 { 00:21:24.280 "name": "spare", 00:21:24.280 "uuid": "92346ff9-a3b2-5c5d-9993-a1878d038c5a", 00:21:24.280 "is_configured": true, 00:21:24.280 "data_offset": 2048, 00:21:24.280 "data_size": 63488 00:21:24.280 }, 00:21:24.280 { 00:21:24.280 "name": "BaseBdev2", 00:21:24.280 "uuid": "1751a4c3-7938-56df-a43d-9aa3a99a0285", 00:21:24.280 "is_configured": true, 00:21:24.280 "data_offset": 2048, 00:21:24.280 "data_size": 63488 00:21:24.280 } 00:21:24.280 ] 00:21:24.280 }' 00:21:24.280 16:37:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:24.280 16:37:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:24.280 16:37:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target 
// "none"' 00:21:24.280 16:37:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.280 16:37:00 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:24.538 [2024-07-11 16:37:01.138571] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:24.538 [2024-07-11 16:37:01.167574] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:24.538 [2024-07-11 16:37:01.251462] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:24.797 [2024-07-11 16:37:01.352213] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:24.797 [2024-07-11 16:37:01.366058] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:24.797 [2024-07-11 16:37:01.391454] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:21:24.797 16:37:01 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:24.797 16:37:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:24.797 16:37:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:24.797 16:37:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:24.797 16:37:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:24.797 16:37:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:21:24.797 16:37:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:24.797 16:37:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:24.797 16:37:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:24.797 16:37:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:24.797 16:37:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.797 16:37:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.056 16:37:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:25.056 "name": "raid_bdev1", 00:21:25.056 "uuid": "1f671bdb-e499-4889-bf5f-8fefcb7dd948", 00:21:25.056 "strip_size_kb": 0, 00:21:25.056 "state": "online", 00:21:25.056 "raid_level": "raid1", 00:21:25.056 "superblock": true, 00:21:25.056 "num_base_bdevs": 2, 00:21:25.056 "num_base_bdevs_discovered": 1, 00:21:25.056 "num_base_bdevs_operational": 1, 00:21:25.056 "base_bdevs_list": [ 00:21:25.056 { 00:21:25.056 "name": null, 00:21:25.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.056 "is_configured": false, 00:21:25.056 "data_offset": 2048, 00:21:25.056 "data_size": 63488 00:21:25.056 }, 00:21:25.056 { 00:21:25.056 "name": "BaseBdev2", 00:21:25.056 "uuid": "1751a4c3-7938-56df-a43d-9aa3a99a0285", 00:21:25.056 "is_configured": true, 00:21:25.056 "data_offset": 2048, 00:21:25.056 "data_size": 63488 00:21:25.056 } 00:21:25.056 ] 00:21:25.056 }' 00:21:25.056 16:37:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:25.056 16:37:01 -- common/autotest_common.sh@10 -- # set +x 00:21:25.623 16:37:02 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:25.623 16:37:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:25.623 16:37:02 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:25.623 16:37:02 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:25.624 16:37:02 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:25.624 16:37:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.624 16:37:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.881 16:37:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:25.881 "name": "raid_bdev1", 00:21:25.881 "uuid": "1f671bdb-e499-4889-bf5f-8fefcb7dd948", 00:21:25.881 "strip_size_kb": 0, 00:21:25.881 "state": "online", 00:21:25.881 "raid_level": "raid1", 00:21:25.881 "superblock": true, 00:21:25.881 "num_base_bdevs": 2, 00:21:25.881 "num_base_bdevs_discovered": 1, 00:21:25.881 "num_base_bdevs_operational": 1, 00:21:25.881 "base_bdevs_list": [ 00:21:25.881 { 00:21:25.881 "name": null, 00:21:25.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.881 "is_configured": false, 00:21:25.881 "data_offset": 2048, 00:21:25.881 "data_size": 63488 00:21:25.881 }, 00:21:25.881 { 00:21:25.881 "name": "BaseBdev2", 00:21:25.881 "uuid": "1751a4c3-7938-56df-a43d-9aa3a99a0285", 00:21:25.881 "is_configured": true, 00:21:25.881 "data_offset": 2048, 00:21:25.881 "data_size": 63488 00:21:25.881 } 00:21:25.881 ] 00:21:25.881 }' 00:21:25.881 16:37:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:25.881 16:37:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:25.881 16:37:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.138 16:37:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:26.138 16:37:02 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:26.138 [2024-07-11 16:37:02.931048] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:26.138 [2024-07-11 16:37:02.931114] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:26.396 16:37:02 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:26.396 [2024-07-11 16:37:02.973312] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:26.396 [2024-07-11 16:37:02.975280] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:26.396 [2024-07-11 16:37:03.102845] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:26.396 [2024-07-11 16:37:03.103197] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:26.654 [2024-07-11 16:37:03.335826] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:26.654 [2024-07-11 16:37:03.336024] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:26.913 [2024-07-11 16:37:03.652649] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:27.170 16:37:03 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.170 16:37:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:27.170 [2024-07-11 16:37:03.975097] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:27.170 16:37:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:27.170 16:37:03 -- bdev/bdev_raid.sh@185 -- 
# local target=spare 00:21:27.170 16:37:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:27.170 16:37:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.170 16:37:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.427 [2024-07-11 16:37:04.110185] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:27.427 16:37:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:27.427 "name": "raid_bdev1", 00:21:27.427 "uuid": "1f671bdb-e499-4889-bf5f-8fefcb7dd948", 00:21:27.427 "strip_size_kb": 0, 00:21:27.427 "state": "online", 00:21:27.427 "raid_level": "raid1", 00:21:27.427 "superblock": true, 00:21:27.427 "num_base_bdevs": 2, 00:21:27.427 "num_base_bdevs_discovered": 2, 00:21:27.427 "num_base_bdevs_operational": 2, 00:21:27.427 "process": { 00:21:27.427 "type": "rebuild", 00:21:27.427 "target": "spare", 00:21:27.427 "progress": { 00:21:27.427 "blocks": 16384, 00:21:27.427 "percent": 25 00:21:27.427 } 00:21:27.427 }, 00:21:27.427 "base_bdevs_list": [ 00:21:27.427 { 00:21:27.427 "name": "spare", 00:21:27.427 "uuid": "92346ff9-a3b2-5c5d-9993-a1878d038c5a", 00:21:27.427 "is_configured": true, 00:21:27.427 "data_offset": 2048, 00:21:27.427 "data_size": 63488 00:21:27.427 }, 00:21:27.427 { 00:21:27.427 "name": "BaseBdev2", 00:21:27.427 "uuid": "1751a4c3-7938-56df-a43d-9aa3a99a0285", 00:21:27.427 "is_configured": true, 00:21:27.427 "data_offset": 2048, 00:21:27.427 "data_size": 63488 00:21:27.427 } 00:21:27.427 ] 00:21:27.427 }' 00:21:27.427 16:37:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:27.686 [2024-07-11 16:37:04.335573] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:27.686 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@657 -- # local timeout=440 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.686 16:37:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.686 [2024-07-11 16:37:04.438319] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 
offset_begin: 18432 offset_end: 24576 00:21:27.686 [2024-07-11 16:37:04.438630] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:27.944 16:37:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:27.944 "name": "raid_bdev1", 00:21:27.944 "uuid": "1f671bdb-e499-4889-bf5f-8fefcb7dd948", 00:21:27.944 "strip_size_kb": 0, 00:21:27.944 "state": "online", 00:21:27.944 "raid_level": "raid1", 00:21:27.944 "superblock": true, 00:21:27.944 "num_base_bdevs": 2, 00:21:27.944 "num_base_bdevs_discovered": 2, 00:21:27.944 "num_base_bdevs_operational": 2, 00:21:27.944 "process": { 00:21:27.944 "type": "rebuild", 00:21:27.944 "target": "spare", 00:21:27.944 "progress": { 00:21:27.944 "blocks": 22528, 00:21:27.944 "percent": 35 00:21:27.944 } 00:21:27.944 }, 00:21:27.944 "base_bdevs_list": [ 00:21:27.944 { 00:21:27.944 "name": "spare", 00:21:27.944 "uuid": "92346ff9-a3b2-5c5d-9993-a1878d038c5a", 00:21:27.944 "is_configured": true, 00:21:27.944 "data_offset": 2048, 00:21:27.944 "data_size": 63488 00:21:27.944 }, 00:21:27.944 { 00:21:27.944 "name": "BaseBdev2", 00:21:27.944 "uuid": "1751a4c3-7938-56df-a43d-9aa3a99a0285", 00:21:27.944 "is_configured": true, 00:21:27.944 "data_offset": 2048, 00:21:27.944 "data_size": 63488 00:21:27.944 } 00:21:27.944 ] 00:21:27.944 }' 00:21:27.944 16:37:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:27.944 16:37:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:27.944 16:37:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:27.944 16:37:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.944 16:37:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:28.202 [2024-07-11 16:37:04.774081] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:28.202 [2024-07-11 16:37:04.774548] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:28.202 [2024-07-11 16:37:04.895367] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:28.461 [2024-07-11 16:37:05.128976] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:28.719 [2024-07-11 16:37:05.350820] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:28.993 16:37:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:28.993 16:37:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:28.993 16:37:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:28.993 16:37:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:28.993 16:37:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:28.993 16:37:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:28.993 16:37:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.993 16:37:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.263 16:37:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.263 "name": "raid_bdev1", 00:21:29.263 "uuid": "1f671bdb-e499-4889-bf5f-8fefcb7dd948", 00:21:29.263 "strip_size_kb": 0, 00:21:29.263 "state": "online", 00:21:29.263 "raid_level": "raid1", 
00:21:29.263 "superblock": true, 00:21:29.263 "num_base_bdevs": 2, 00:21:29.263 "num_base_bdevs_discovered": 2, 00:21:29.263 "num_base_bdevs_operational": 2, 00:21:29.263 "process": { 00:21:29.263 "type": "rebuild", 00:21:29.263 "target": "spare", 00:21:29.263 "progress": { 00:21:29.263 "blocks": 43008, 00:21:29.263 "percent": 67 00:21:29.263 } 00:21:29.263 }, 00:21:29.263 "base_bdevs_list": [ 00:21:29.263 { 00:21:29.263 "name": "spare", 00:21:29.263 "uuid": "92346ff9-a3b2-5c5d-9993-a1878d038c5a", 00:21:29.263 "is_configured": true, 00:21:29.263 "data_offset": 2048, 00:21:29.263 "data_size": 63488 00:21:29.263 }, 00:21:29.263 { 00:21:29.263 "name": "BaseBdev2", 00:21:29.263 "uuid": "1751a4c3-7938-56df-a43d-9aa3a99a0285", 00:21:29.263 "is_configured": true, 00:21:29.263 "data_offset": 2048, 00:21:29.263 "data_size": 63488 00:21:29.263 } 00:21:29.263 ] 00:21:29.263 }' 00:21:29.263 16:37:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.263 16:37:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.263 16:37:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:29.263 16:37:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.263 16:37:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:29.263 [2024-07-11 16:37:06.030883] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:21:29.831 [2024-07-11 16:37:06.451865] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:29.831 [2024-07-11 16:37:06.452121] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:30.090 [2024-07-11 16:37:06.888724] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:30.349 16:37:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:30.349 16:37:06 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:30.349 16:37:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:30.349 16:37:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:30.349 16:37:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:30.349 16:37:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:30.349 16:37:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.349 16:37:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.349 [2024-07-11 16:37:07.117228] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:30.608 [2024-07-11 16:37:07.217262] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:30.608 [2024-07-11 16:37:07.225468] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.608 16:37:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:30.608 "name": "raid_bdev1", 00:21:30.608 "uuid": "1f671bdb-e499-4889-bf5f-8fefcb7dd948", 00:21:30.608 "strip_size_kb": 0, 00:21:30.608 "state": "online", 00:21:30.609 "raid_level": "raid1", 00:21:30.609 "superblock": true, 00:21:30.609 "num_base_bdevs": 2, 00:21:30.609 "num_base_bdevs_discovered": 2, 00:21:30.609 "num_base_bdevs_operational": 2, 00:21:30.609 "base_bdevs_list": [ 00:21:30.609 { 00:21:30.609 "name": "spare", 00:21:30.609 
"uuid": "92346ff9-a3b2-5c5d-9993-a1878d038c5a", 00:21:30.609 "is_configured": true, 00:21:30.609 "data_offset": 2048, 00:21:30.609 "data_size": 63488 00:21:30.609 }, 00:21:30.609 { 00:21:30.609 "name": "BaseBdev2", 00:21:30.609 "uuid": "1751a4c3-7938-56df-a43d-9aa3a99a0285", 00:21:30.609 "is_configured": true, 00:21:30.609 "data_offset": 2048, 00:21:30.609 "data_size": 63488 00:21:30.609 } 00:21:30.609 ] 00:21:30.609 }' 00:21:30.609 16:37:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:30.609 16:37:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:30.609 16:37:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:30.609 16:37:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:30.609 16:37:07 -- bdev/bdev_raid.sh@660 -- # break 00:21:30.609 16:37:07 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:30.609 16:37:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:30.609 16:37:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:30.609 16:37:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:30.609 16:37:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:30.609 16:37:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.609 16:37:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.868 16:37:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:30.868 "name": "raid_bdev1", 00:21:30.868 "uuid": "1f671bdb-e499-4889-bf5f-8fefcb7dd948", 00:21:30.868 "strip_size_kb": 0, 00:21:30.868 "state": "online", 00:21:30.868 "raid_level": "raid1", 00:21:30.868 "superblock": true, 00:21:30.868 "num_base_bdevs": 2, 00:21:30.868 "num_base_bdevs_discovered": 2, 00:21:30.868 "num_base_bdevs_operational": 2, 00:21:30.868 "base_bdevs_list": [ 00:21:30.868 { 00:21:30.868 "name": "spare", 00:21:30.868 "uuid": "92346ff9-a3b2-5c5d-9993-a1878d038c5a", 00:21:30.868 "is_configured": true, 00:21:30.868 "data_offset": 2048, 00:21:30.868 "data_size": 63488 00:21:30.868 }, 00:21:30.868 { 00:21:30.868 "name": "BaseBdev2", 00:21:30.868 "uuid": "1751a4c3-7938-56df-a43d-9aa3a99a0285", 00:21:30.868 "is_configured": true, 00:21:30.868 "data_offset": 2048, 00:21:30.868 "data_size": 63488 00:21:30.868 } 00:21:30.868 ] 00:21:30.868 }' 00:21:30.868 16:37:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@127 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.127 16:37:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.386 16:37:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:31.386 "name": "raid_bdev1", 00:21:31.386 "uuid": "1f671bdb-e499-4889-bf5f-8fefcb7dd948", 00:21:31.386 "strip_size_kb": 0, 00:21:31.386 "state": "online", 00:21:31.386 "raid_level": "raid1", 00:21:31.386 "superblock": true, 00:21:31.386 "num_base_bdevs": 2, 00:21:31.386 "num_base_bdevs_discovered": 2, 00:21:31.386 "num_base_bdevs_operational": 2, 00:21:31.386 "base_bdevs_list": [ 00:21:31.386 { 00:21:31.386 "name": "spare", 00:21:31.386 "uuid": "92346ff9-a3b2-5c5d-9993-a1878d038c5a", 00:21:31.386 "is_configured": true, 00:21:31.386 "data_offset": 2048, 00:21:31.386 "data_size": 63488 00:21:31.386 }, 00:21:31.386 { 00:21:31.386 "name": "BaseBdev2", 00:21:31.386 "uuid": "1751a4c3-7938-56df-a43d-9aa3a99a0285", 00:21:31.386 "is_configured": true, 00:21:31.386 "data_offset": 2048, 00:21:31.386 "data_size": 63488 00:21:31.386 } 00:21:31.386 ] 00:21:31.386 }' 00:21:31.386 16:37:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:31.386 16:37:07 -- common/autotest_common.sh@10 -- # set +x 00:21:31.954 16:37:08 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:32.213 [2024-07-11 16:37:08.783714] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:32.213 [2024-07-11 16:37:08.783751] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:32.213 00:21:32.213 Latency(us) 00:21:32.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.213 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:32.213 raid_bdev1 : 10.57 117.39 352.18 0.00 0.00 11396.32 303.48 113436.86 00:21:32.213 =================================================================================================================== 00:21:32.213 Total : 117.39 352.18 0.00 0.00 11396.32 303.48 113436.86 00:21:32.213 [2024-07-11 16:37:08.870417] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:32.213 [2024-07-11 16:37:08.870469] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:32.213 0 00:21:32.213 [2024-07-11 16:37:08.870545] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:32.213 [2024-07-11 16:37:08.870557] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:21:32.213 16:37:08 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.213 16:37:08 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:32.472 16:37:09 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:32.472 16:37:09 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:32.472 16:37:09 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:32.472 16:37:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:32.472 16:37:09 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:32.472 16:37:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:32.472 16:37:09 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:32.472 16:37:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 
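[Editor's sketch] At this point raid_rebuild_test_sb_io has deleted raid_bdev1 (the bdev_raid_get_bdevs | jq length check above returns 0) and is exporting the two surviving members over NBD so ordinary block tools can compare them. A minimal sketch of what the nbd_start_disks helper traced here reduces to, with the socket, bdev names and /dev/nbd paths taken from the log (it assumes a live SPDK target and a loaded nbd kernel module):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Expose each bdev as a kernel block device via the nbd_start_disk RPC.
    "$rpc" -s "$sock" nbd_start_disk spare /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk BaseBdev2 /dev/nbd1

    # With both raid1 legs visible as /dev/nbd*, plain cmp can prove they are
    # mirrors. The harness runs `cmp -i 1048576 /dev/nbd0 /dev/nbd1`, skipping
    # the first 1 MiB on each device - which matches the 2048-block (512 B)
    # data_offset of the superblock-enabled array in the JSON dumps above.

The waitfornbd polling of /proc/partitions and the single 4 KiB dd probe that follow in the trace are only the helper confirming each nbd device actually came up before it is used.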
00:21:32.472 16:37:09 -- bdev/nbd_common.sh@12 -- # local i 00:21:32.472 16:37:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:32.472 16:37:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:32.472 16:37:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:32.731 /dev/nbd0 00:21:32.731 16:37:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:32.731 16:37:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:32.731 16:37:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:32.731 16:37:09 -- common/autotest_common.sh@857 -- # local i 00:21:32.731 16:37:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:32.731 16:37:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:32.731 16:37:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:32.731 16:37:09 -- common/autotest_common.sh@861 -- # break 00:21:32.731 16:37:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:32.731 16:37:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:32.731 16:37:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:32.731 1+0 records in 00:21:32.731 1+0 records out 00:21:32.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413477 s, 9.9 MB/s 00:21:32.731 16:37:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:32.731 16:37:09 -- common/autotest_common.sh@874 -- # size=4096 00:21:32.731 16:37:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:32.731 16:37:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:32.731 16:37:09 -- common/autotest_common.sh@877 -- # return 0 00:21:32.731 16:37:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:32.731 16:37:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:32.731 16:37:09 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:32.731 16:37:09 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:32.731 16:37:09 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:32.731 16:37:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:32.731 16:37:09 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:32.731 16:37:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:32.731 16:37:09 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:32.731 16:37:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:32.731 16:37:09 -- bdev/nbd_common.sh@12 -- # local i 00:21:32.731 16:37:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:32.731 16:37:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:32.731 16:37:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:32.989 /dev/nbd1 00:21:32.989 16:37:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:32.989 16:37:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:32.989 16:37:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:32.989 16:37:09 -- common/autotest_common.sh@857 -- # local i 00:21:32.989 16:37:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:32.989 16:37:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:32.989 16:37:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:32.989 16:37:09 -- common/autotest_common.sh@861 -- # 
break 00:21:32.989 16:37:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:32.989 16:37:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:32.989 16:37:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:32.989 1+0 records in 00:21:32.989 1+0 records out 00:21:32.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346408 s, 11.8 MB/s 00:21:32.989 16:37:09 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:32.989 16:37:09 -- common/autotest_common.sh@874 -- # size=4096 00:21:32.989 16:37:09 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:32.989 16:37:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:32.989 16:37:09 -- common/autotest_common.sh@877 -- # return 0 00:21:32.989 16:37:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:32.989 16:37:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:32.989 16:37:09 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:32.989 16:37:09 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:32.989 16:37:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:32.989 16:37:09 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:32.989 16:37:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:32.989 16:37:09 -- bdev/nbd_common.sh@51 -- # local i 00:21:32.989 16:37:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:32.989 16:37:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:33.248 16:37:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:33.248 16:37:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:33.248 16:37:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:33.248 16:37:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:33.248 16:37:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:33.248 16:37:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:33.248 16:37:09 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:33.248 16:37:10 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:33.248 16:37:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:33.248 16:37:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:33.248 16:37:10 -- bdev/nbd_common.sh@41 -- # break 00:21:33.248 16:37:10 -- bdev/nbd_common.sh@45 -- # return 0 00:21:33.248 16:37:10 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:33.248 16:37:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:33.248 16:37:10 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:33.248 16:37:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:33.248 16:37:10 -- bdev/nbd_common.sh@51 -- # local i 00:21:33.248 16:37:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:33.248 16:37:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:33.508 16:37:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:33.508 16:37:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:33.508 16:37:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:33.508 16:37:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:33.508 16:37:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:33.508 16:37:10 -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd0 /proc/partitions 00:21:33.508 16:37:10 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:33.767 16:37:10 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:33.767 16:37:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:33.767 16:37:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:33.767 16:37:10 -- bdev/nbd_common.sh@41 -- # break 00:21:33.767 16:37:10 -- bdev/nbd_common.sh@45 -- # return 0 00:21:33.767 16:37:10 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:33.767 16:37:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:33.767 16:37:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:33.767 16:37:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:34.025 16:37:10 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:34.284 [2024-07-11 16:37:10.864302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:34.284 [2024-07-11 16:37:10.864392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.284 [2024-07-11 16:37:10.864425] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:34.284 [2024-07-11 16:37:10.864450] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.284 [2024-07-11 16:37:10.866495] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.284 [2024-07-11 16:37:10.866569] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:34.284 [2024-07-11 16:37:10.866687] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:34.284 [2024-07-11 16:37:10.866757] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:34.284 BaseBdev1 00:21:34.284 16:37:10 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:34.284 16:37:10 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:34.284 16:37:10 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:34.284 16:37:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:34.542 [2024-07-11 16:37:11.233548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:34.542 [2024-07-11 16:37:11.233633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.542 [2024-07-11 16:37:11.233666] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:34.542 [2024-07-11 16:37:11.233690] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.542 [2024-07-11 16:37:11.234156] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.542 [2024-07-11 16:37:11.234234] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:34.542 [2024-07-11 16:37:11.234399] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:34.542 [2024-07-11 16:37:11.234414] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 
(1) 00:21:34.542 [2024-07-11 16:37:11.234422] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:34.542 [2024-07-11 16:37:11.234439] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:21:34.542 [2024-07-11 16:37:11.234510] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:34.542 BaseBdev2 00:21:34.542 16:37:11 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:34.802 16:37:11 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:34.802 [2024-07-11 16:37:11.597644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:34.802 [2024-07-11 16:37:11.597715] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.802 [2024-07-11 16:37:11.597748] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:34.802 [2024-07-11 16:37:11.597767] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.802 [2024-07-11 16:37:11.598231] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.802 [2024-07-11 16:37:11.598287] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:34.802 [2024-07-11 16:37:11.598418] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:34.802 [2024-07-11 16:37:11.598457] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:34.802 spare 00:21:35.061 16:37:11 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:35.061 16:37:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:35.061 16:37:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:35.061 16:37:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:35.061 16:37:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:35.062 16:37:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:35.062 16:37:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:35.062 16:37:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:35.062 16:37:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:35.062 16:37:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:35.062 16:37:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.062 16:37:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.062 [2024-07-11 16:37:11.698557] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:21:35.062 [2024-07-11 16:37:11.698585] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:35.062 [2024-07-11 16:37:11.698737] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cee0 00:21:35.062 [2024-07-11 16:37:11.699078] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:21:35.062 [2024-07-11 16:37:11.699103] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:21:35.062 [2024-07-11 16:37:11.699268] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:21:35.062 16:37:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:35.062 "name": "raid_bdev1", 00:21:35.062 "uuid": "1f671bdb-e499-4889-bf5f-8fefcb7dd948", 00:21:35.062 "strip_size_kb": 0, 00:21:35.062 "state": "online", 00:21:35.062 "raid_level": "raid1", 00:21:35.062 "superblock": true, 00:21:35.062 "num_base_bdevs": 2, 00:21:35.062 "num_base_bdevs_discovered": 2, 00:21:35.062 "num_base_bdevs_operational": 2, 00:21:35.062 "base_bdevs_list": [ 00:21:35.062 { 00:21:35.062 "name": "spare", 00:21:35.062 "uuid": "92346ff9-a3b2-5c5d-9993-a1878d038c5a", 00:21:35.062 "is_configured": true, 00:21:35.062 "data_offset": 2048, 00:21:35.062 "data_size": 63488 00:21:35.062 }, 00:21:35.062 { 00:21:35.062 "name": "BaseBdev2", 00:21:35.062 "uuid": "1751a4c3-7938-56df-a43d-9aa3a99a0285", 00:21:35.062 "is_configured": true, 00:21:35.062 "data_offset": 2048, 00:21:35.062 "data_size": 63488 00:21:35.062 } 00:21:35.062 ] 00:21:35.062 }' 00:21:35.062 16:37:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:35.062 16:37:11 -- common/autotest_common.sh@10 -- # set +x 00:21:35.630 16:37:12 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:35.630 16:37:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:35.630 16:37:12 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:35.630 16:37:12 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:35.630 16:37:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:35.630 16:37:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.630 16:37:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.889 16:37:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:35.889 "name": "raid_bdev1", 00:21:35.889 "uuid": "1f671bdb-e499-4889-bf5f-8fefcb7dd948", 00:21:35.889 "strip_size_kb": 0, 00:21:35.889 "state": "online", 00:21:35.889 "raid_level": "raid1", 00:21:35.889 "superblock": true, 00:21:35.889 "num_base_bdevs": 2, 00:21:35.889 "num_base_bdevs_discovered": 2, 00:21:35.889 "num_base_bdevs_operational": 2, 00:21:35.889 "base_bdevs_list": [ 00:21:35.889 { 00:21:35.889 "name": "spare", 00:21:35.889 "uuid": "92346ff9-a3b2-5c5d-9993-a1878d038c5a", 00:21:35.889 "is_configured": true, 00:21:35.889 "data_offset": 2048, 00:21:35.889 "data_size": 63488 00:21:35.889 }, 00:21:35.889 { 00:21:35.889 "name": "BaseBdev2", 00:21:35.889 "uuid": "1751a4c3-7938-56df-a43d-9aa3a99a0285", 00:21:35.889 "is_configured": true, 00:21:35.889 "data_offset": 2048, 00:21:35.889 "data_size": 63488 00:21:35.889 } 00:21:35.889 ] 00:21:35.889 }' 00:21:35.889 16:37:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:35.889 16:37:12 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:35.889 16:37:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:35.889 16:37:12 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:35.889 16:37:12 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.889 16:37:12 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:36.148 16:37:12 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:36.148 16:37:12 -- bdev/bdev_raid.sh@709 -- # killprocess 127233 00:21:36.148 16:37:12 -- common/autotest_common.sh@926 -- # '[' -z 127233 ']' 00:21:36.148 16:37:12 -- common/autotest_common.sh@930 -- # kill -0 127233 00:21:36.148 
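[Editor's sketch] The xtrace entries on either side of this point are the killprocess helper from autotest_common.sh shutting down the bdevperf target (pid 127233) at the end of the test. Reconstructed from the trace alone, its core is roughly the sketch below; the real helper carries extra platform and privilege handling (the Linux uname check and the reactor_0/sudo comparison visible nearby), so treat this as an approximation:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                             # assert the process still exists
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an SPDK app
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap it so its exit code propagates
    }

Waiting on the pid is what lets the "Received shutdown signal" latency summary and the final bdev_raid teardown DEBUG lines flush before the next test starts.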
16:37:12 -- common/autotest_common.sh@931 -- # uname 00:21:36.148 16:37:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:36.148 16:37:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127233 00:21:36.148 killing process with pid 127233 00:21:36.148 Received shutdown signal, test time was about 14.570339 seconds 00:21:36.148 00:21:36.148 Latency(us) 00:21:36.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.148 =================================================================================================================== 00:21:36.148 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:36.148 16:37:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:36.148 16:37:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:36.148 16:37:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127233' 00:21:36.148 16:37:12 -- common/autotest_common.sh@945 -- # kill 127233 00:21:36.148 16:37:12 -- common/autotest_common.sh@950 -- # wait 127233 00:21:36.148 [2024-07-11 16:37:12.854779] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:36.148 [2024-07-11 16:37:12.854890] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:36.148 [2024-07-11 16:37:12.854956] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:36.148 [2024-07-11 16:37:12.854966] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:21:36.407 [2024-07-11 16:37:13.003430] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:37.340 ************************************ 00:21:37.340 END TEST raid_rebuild_test_sb_io 00:21:37.340 ************************************ 00:21:37.340 16:37:13 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:37.340 00:21:37.340 real 0m19.606s 00:21:37.340 user 0m31.515s 00:21:37.340 sys 0m2.069s 00:21:37.340 16:37:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:37.340 16:37:13 -- common/autotest_common.sh@10 -- # set +x 00:21:37.340 16:37:13 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:21:37.340 16:37:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:37.340 16:37:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:37.340 16:37:14 -- common/autotest_common.sh@10 -- # set +x 00:21:37.340 ************************************ 00:21:37.340 START TEST raid_rebuild_test 00:21:37.340 ************************************ 00:21:37.340 16:37:14 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # (( i <= 
num_base_bdevs )) 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@544 -- # raid_pid=127817 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@545 -- # waitforlisten 127817 /var/tmp/spdk-raid.sock 00:21:37.340 16:37:14 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:37.340 16:37:14 -- common/autotest_common.sh@819 -- # '[' -z 127817 ']' 00:21:37.340 16:37:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:37.340 16:37:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:37.340 16:37:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:37.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:37.340 16:37:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:37.340 16:37:14 -- common/autotest_common.sh@10 -- # set +x 00:21:37.340 [2024-07-11 16:37:14.066473] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:37.340 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:37.340 Zero copy mechanism will not be used. 
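[Editor's sketch] The START TEST banner above begins raid_rebuild_test raid1 4 false false: a four-member raid1 with no superblock and no background I/O. Condensed from the trace that follows, the setup amounts to launching bdevperf as an RPC-driven target and assembling the array from malloc bdevs, with the spare placed behind a delay bdev so the later rebuild stays slow enough to observe. All paths, flags and sizes below are copied from the log; only the loop and the shell variables are added for readability:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Flags taken verbatim from the log; -z makes bdevperf wait to be driven
    # over RPC instead of starting I/O immediately, so the bdevs can be
    # created over the socket first.
    "$bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # (the harness then waits for $sock to accept connections)

    for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"   # 32 MiB, 512 B blocks
    done

    # The spare: malloc -> delay -> passthru named "spare". Per SPDK's
    # bdev_delay_create, -w/-n are write latencies in microseconds, so the
    # spare's writes take ~100 ms each.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b spare_malloc
    "$rpc" -s "$sock" bdev_delay_create -b spare_malloc -d spare_delay \
        -r 0 -t 0 -w 100000 -n 100000
    "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare

    "$rpc" -s "$sock" bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

From there, as the rest of the trace shows, the test writes 32 MiB of random data through an NBD export of raid_bdev1, removes BaseBdev1 with bdev_raid_remove_base_bdev, attaches the spare with bdev_raid_add_base_bdev, and polls bdev_raid_get_bdevs until .process reports type "rebuild" with target "spare".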
00:21:37.340 [2024-07-11 16:37:14.066653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127817 ] 00:21:37.599 [2024-07-11 16:37:14.214209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.599 [2024-07-11 16:37:14.378184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.857 [2024-07-11 16:37:14.547565] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:38.423 16:37:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:38.423 16:37:15 -- common/autotest_common.sh@852 -- # return 0 00:21:38.423 16:37:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:38.423 16:37:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:38.423 16:37:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:38.423 BaseBdev1 00:21:38.682 16:37:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:38.682 16:37:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:38.682 16:37:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:38.682 BaseBdev2 00:21:38.682 16:37:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:38.682 16:37:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:38.682 16:37:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:38.940 BaseBdev3 00:21:38.940 16:37:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:38.940 16:37:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:38.940 16:37:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:39.198 BaseBdev4 00:21:39.198 16:37:15 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:39.456 spare_malloc 00:21:39.456 16:37:16 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:39.714 spare_delay 00:21:39.714 16:37:16 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:39.714 [2024-07-11 16:37:16.501843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:39.714 [2024-07-11 16:37:16.501936] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.714 [2024-07-11 16:37:16.501967] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:39.714 [2024-07-11 16:37:16.502022] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.714 [2024-07-11 16:37:16.503990] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.714 [2024-07-11 16:37:16.504037] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:39.714 spare 00:21:39.714 16:37:16 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:39.973 [2024-07-11 16:37:16.685908] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:39.973 [2024-07-11 16:37:16.687441] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:39.973 [2024-07-11 16:37:16.687493] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:39.973 [2024-07-11 16:37:16.687528] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:39.973 [2024-07-11 16:37:16.687595] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:21:39.973 [2024-07-11 16:37:16.687606] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:39.973 [2024-07-11 16:37:16.687765] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:39.973 [2024-07-11 16:37:16.688084] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:21:39.973 [2024-07-11 16:37:16.688109] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:21:39.973 [2024-07-11 16:37:16.688261] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.973 16:37:16 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:39.973 16:37:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:39.973 16:37:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:39.973 16:37:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:39.973 16:37:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:39.973 16:37:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:39.973 16:37:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:39.973 16:37:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:39.973 16:37:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:39.973 16:37:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:39.973 16:37:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.973 16:37:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.230 16:37:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:40.230 "name": "raid_bdev1", 00:21:40.230 "uuid": "f6d4dc94-4da7-42a0-8e60-55afd8329c08", 00:21:40.230 "strip_size_kb": 0, 00:21:40.230 "state": "online", 00:21:40.230 "raid_level": "raid1", 00:21:40.230 "superblock": false, 00:21:40.230 "num_base_bdevs": 4, 00:21:40.230 "num_base_bdevs_discovered": 4, 00:21:40.230 "num_base_bdevs_operational": 4, 00:21:40.230 "base_bdevs_list": [ 00:21:40.230 { 00:21:40.230 "name": "BaseBdev1", 00:21:40.230 "uuid": "f0c38017-b55a-44ed-98bd-37906c810bc0", 00:21:40.230 "is_configured": true, 00:21:40.230 "data_offset": 0, 00:21:40.230 "data_size": 65536 00:21:40.230 }, 00:21:40.230 { 00:21:40.230 "name": "BaseBdev2", 00:21:40.230 "uuid": "67732f0d-c8c3-4fc3-8791-85452918f3de", 00:21:40.230 "is_configured": true, 00:21:40.230 "data_offset": 0, 00:21:40.230 "data_size": 65536 00:21:40.230 }, 00:21:40.230 { 00:21:40.230 "name": "BaseBdev3", 00:21:40.230 "uuid": "911ba344-5574-4885-b492-df29c5b969af", 00:21:40.230 "is_configured": true, 00:21:40.230 "data_offset": 0, 00:21:40.230 "data_size": 65536 00:21:40.230 }, 
00:21:40.230 { 00:21:40.230 "name": "BaseBdev4", 00:21:40.230 "uuid": "cf62562b-4fbc-4dc7-b9e9-ebb1065e329a", 00:21:40.230 "is_configured": true, 00:21:40.230 "data_offset": 0, 00:21:40.230 "data_size": 65536 00:21:40.230 } 00:21:40.230 ] 00:21:40.230 }' 00:21:40.230 16:37:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:40.230 16:37:16 -- common/autotest_common.sh@10 -- # set +x 00:21:40.819 16:37:17 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:40.819 16:37:17 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:41.076 [2024-07-11 16:37:17.710293] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.076 16:37:17 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:41.076 16:37:17 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:41.076 16:37:17 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:41.334 16:37:17 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:41.334 16:37:17 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:41.334 16:37:17 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:41.334 16:37:17 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:41.334 16:37:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:41.334 16:37:17 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:41.334 16:37:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:41.334 16:37:17 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:41.334 16:37:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:41.334 16:37:17 -- bdev/nbd_common.sh@12 -- # local i 00:21:41.334 16:37:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:41.334 16:37:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:41.334 16:37:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:41.334 [2024-07-11 16:37:18.138188] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:41.592 /dev/nbd0 00:21:41.592 16:37:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:41.592 16:37:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:41.592 16:37:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:41.592 16:37:18 -- common/autotest_common.sh@857 -- # local i 00:21:41.592 16:37:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:41.592 16:37:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:41.592 16:37:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:41.592 16:37:18 -- common/autotest_common.sh@861 -- # break 00:21:41.592 16:37:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:41.592 16:37:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:41.592 16:37:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:41.592 1+0 records in 00:21:41.592 1+0 records out 00:21:41.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309689 s, 13.2 MB/s 00:21:41.592 16:37:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.592 16:37:18 -- common/autotest_common.sh@874 -- # size=4096 00:21:41.592 16:37:18 -- common/autotest_common.sh@875 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:41.592 16:37:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:41.592 16:37:18 -- common/autotest_common.sh@877 -- # return 0 00:21:41.592 16:37:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:41.592 16:37:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:41.592 16:37:18 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:41.592 16:37:18 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:41.592 16:37:18 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:46.859 65536+0 records in 00:21:46.859 65536+0 records out 00:21:46.859 33554432 bytes (34 MB, 32 MiB) copied, 4.97054 s, 6.8 MB/s 00:21:46.859 16:37:23 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@51 -- # local i 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:46.859 [2024-07-11 16:37:23.423465] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@41 -- # break 00:21:46.859 16:37:23 -- bdev/nbd_common.sh@45 -- # return 0 00:21:46.859 16:37:23 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:47.116 [2024-07-11 16:37:23.759193] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:47.116 16:37:23 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:47.116 16:37:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:47.116 16:37:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:47.116 16:37:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:47.116 16:37:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:47.116 16:37:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:47.116 16:37:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:47.116 16:37:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:47.116 16:37:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:47.116 16:37:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:47.116 16:37:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.116 16:37:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:47.373 16:37:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:47.373 "name": "raid_bdev1", 00:21:47.373 "uuid": "f6d4dc94-4da7-42a0-8e60-55afd8329c08", 00:21:47.373 "strip_size_kb": 0, 00:21:47.373 "state": "online", 00:21:47.373 "raid_level": "raid1", 00:21:47.373 "superblock": false, 00:21:47.373 "num_base_bdevs": 4, 00:21:47.373 "num_base_bdevs_discovered": 3, 00:21:47.373 "num_base_bdevs_operational": 3, 00:21:47.373 "base_bdevs_list": [ 00:21:47.373 { 00:21:47.373 "name": null, 00:21:47.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.373 "is_configured": false, 00:21:47.373 "data_offset": 0, 00:21:47.373 "data_size": 65536 00:21:47.373 }, 00:21:47.373 { 00:21:47.373 "name": "BaseBdev2", 00:21:47.373 "uuid": "67732f0d-c8c3-4fc3-8791-85452918f3de", 00:21:47.373 "is_configured": true, 00:21:47.373 "data_offset": 0, 00:21:47.373 "data_size": 65536 00:21:47.373 }, 00:21:47.373 { 00:21:47.373 "name": "BaseBdev3", 00:21:47.373 "uuid": "911ba344-5574-4885-b492-df29c5b969af", 00:21:47.373 "is_configured": true, 00:21:47.373 "data_offset": 0, 00:21:47.373 "data_size": 65536 00:21:47.373 }, 00:21:47.373 { 00:21:47.373 "name": "BaseBdev4", 00:21:47.373 "uuid": "cf62562b-4fbc-4dc7-b9e9-ebb1065e329a", 00:21:47.373 "is_configured": true, 00:21:47.373 "data_offset": 0, 00:21:47.373 "data_size": 65536 00:21:47.373 } 00:21:47.373 ] 00:21:47.373 }' 00:21:47.373 16:37:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:47.373 16:37:24 -- common/autotest_common.sh@10 -- # set +x 00:21:47.940 16:37:24 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:48.199 [2024-07-11 16:37:24.787393] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:48.199 [2024-07-11 16:37:24.787432] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:48.199 [2024-07-11 16:37:24.797482] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:21:48.199 [2024-07-11 16:37:24.799156] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:48.199 16:37:24 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:49.137 16:37:25 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:49.137 16:37:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:49.137 16:37:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:49.137 16:37:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:49.137 16:37:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:49.137 16:37:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.137 16:37:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.396 16:37:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:49.396 "name": "raid_bdev1", 00:21:49.396 "uuid": "f6d4dc94-4da7-42a0-8e60-55afd8329c08", 00:21:49.396 "strip_size_kb": 0, 00:21:49.396 "state": "online", 00:21:49.396 "raid_level": "raid1", 00:21:49.396 "superblock": false, 00:21:49.396 "num_base_bdevs": 4, 00:21:49.396 "num_base_bdevs_discovered": 4, 00:21:49.396 "num_base_bdevs_operational": 4, 00:21:49.396 "process": { 00:21:49.396 "type": "rebuild", 00:21:49.396 "target": "spare", 00:21:49.396 "progress": { 00:21:49.396 "blocks": 24576, 00:21:49.396 "percent": 37 00:21:49.396 } 00:21:49.396 }, 
00:21:49.396 "base_bdevs_list": [ 00:21:49.396 { 00:21:49.396 "name": "spare", 00:21:49.396 "uuid": "8082c168-ce4c-5f5f-94c6-0ef05db98938", 00:21:49.396 "is_configured": true, 00:21:49.396 "data_offset": 0, 00:21:49.396 "data_size": 65536 00:21:49.396 }, 00:21:49.396 { 00:21:49.396 "name": "BaseBdev2", 00:21:49.396 "uuid": "67732f0d-c8c3-4fc3-8791-85452918f3de", 00:21:49.396 "is_configured": true, 00:21:49.396 "data_offset": 0, 00:21:49.396 "data_size": 65536 00:21:49.396 }, 00:21:49.396 { 00:21:49.396 "name": "BaseBdev3", 00:21:49.396 "uuid": "911ba344-5574-4885-b492-df29c5b969af", 00:21:49.396 "is_configured": true, 00:21:49.396 "data_offset": 0, 00:21:49.396 "data_size": 65536 00:21:49.396 }, 00:21:49.396 { 00:21:49.396 "name": "BaseBdev4", 00:21:49.396 "uuid": "cf62562b-4fbc-4dc7-b9e9-ebb1065e329a", 00:21:49.396 "is_configured": true, 00:21:49.396 "data_offset": 0, 00:21:49.396 "data_size": 65536 00:21:49.396 } 00:21:49.396 ] 00:21:49.396 }' 00:21:49.396 16:37:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:49.396 16:37:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:49.396 16:37:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:49.396 16:37:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:49.396 16:37:26 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:49.655 [2024-07-11 16:37:26.357671] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:49.655 [2024-07-11 16:37:26.407490] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:49.655 [2024-07-11 16:37:26.407615] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.655 16:37:26 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:49.655 16:37:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:49.655 16:37:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:49.655 16:37:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:49.655 16:37:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:49.655 16:37:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:49.655 16:37:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:49.655 16:37:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:49.655 16:37:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:49.655 16:37:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:49.655 16:37:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.655 16:37:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.914 16:37:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:49.914 "name": "raid_bdev1", 00:21:49.914 "uuid": "f6d4dc94-4da7-42a0-8e60-55afd8329c08", 00:21:49.914 "strip_size_kb": 0, 00:21:49.914 "state": "online", 00:21:49.914 "raid_level": "raid1", 00:21:49.914 "superblock": false, 00:21:49.914 "num_base_bdevs": 4, 00:21:49.914 "num_base_bdevs_discovered": 3, 00:21:49.914 "num_base_bdevs_operational": 3, 00:21:49.914 "base_bdevs_list": [ 00:21:49.914 { 00:21:49.914 "name": null, 00:21:49.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.914 "is_configured": false, 00:21:49.914 "data_offset": 0, 00:21:49.914 "data_size": 65536 00:21:49.914 }, 00:21:49.914 { 
00:21:49.914 "name": "BaseBdev2", 00:21:49.914 "uuid": "67732f0d-c8c3-4fc3-8791-85452918f3de", 00:21:49.914 "is_configured": true, 00:21:49.914 "data_offset": 0, 00:21:49.914 "data_size": 65536 00:21:49.914 }, 00:21:49.914 { 00:21:49.914 "name": "BaseBdev3", 00:21:49.914 "uuid": "911ba344-5574-4885-b492-df29c5b969af", 00:21:49.914 "is_configured": true, 00:21:49.914 "data_offset": 0, 00:21:49.914 "data_size": 65536 00:21:49.914 }, 00:21:49.914 { 00:21:49.914 "name": "BaseBdev4", 00:21:49.914 "uuid": "cf62562b-4fbc-4dc7-b9e9-ebb1065e329a", 00:21:49.914 "is_configured": true, 00:21:49.914 "data_offset": 0, 00:21:49.914 "data_size": 65536 00:21:49.914 } 00:21:49.914 ] 00:21:49.914 }' 00:21:49.914 16:37:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:49.914 16:37:26 -- common/autotest_common.sh@10 -- # set +x 00:21:50.481 16:37:27 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:50.481 16:37:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.481 16:37:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:50.481 16:37:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:50.481 16:37:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.481 16:37:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.481 16:37:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.740 16:37:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:50.740 "name": "raid_bdev1", 00:21:50.740 "uuid": "f6d4dc94-4da7-42a0-8e60-55afd8329c08", 00:21:50.740 "strip_size_kb": 0, 00:21:50.740 "state": "online", 00:21:50.740 "raid_level": "raid1", 00:21:50.740 "superblock": false, 00:21:50.740 "num_base_bdevs": 4, 00:21:50.740 "num_base_bdevs_discovered": 3, 00:21:50.740 "num_base_bdevs_operational": 3, 00:21:50.740 "base_bdevs_list": [ 00:21:50.740 { 00:21:50.740 "name": null, 00:21:50.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.740 "is_configured": false, 00:21:50.740 "data_offset": 0, 00:21:50.740 "data_size": 65536 00:21:50.740 }, 00:21:50.740 { 00:21:50.740 "name": "BaseBdev2", 00:21:50.740 "uuid": "67732f0d-c8c3-4fc3-8791-85452918f3de", 00:21:50.740 "is_configured": true, 00:21:50.740 "data_offset": 0, 00:21:50.740 "data_size": 65536 00:21:50.740 }, 00:21:50.740 { 00:21:50.740 "name": "BaseBdev3", 00:21:50.740 "uuid": "911ba344-5574-4885-b492-df29c5b969af", 00:21:50.740 "is_configured": true, 00:21:50.740 "data_offset": 0, 00:21:50.740 "data_size": 65536 00:21:50.740 }, 00:21:50.740 { 00:21:50.740 "name": "BaseBdev4", 00:21:50.740 "uuid": "cf62562b-4fbc-4dc7-b9e9-ebb1065e329a", 00:21:50.740 "is_configured": true, 00:21:50.740 "data_offset": 0, 00:21:50.740 "data_size": 65536 00:21:50.740 } 00:21:50.740 ] 00:21:50.740 }' 00:21:50.740 16:37:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:50.999 16:37:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:50.999 16:37:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:50.999 16:37:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:50.999 16:37:27 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:50.999 [2024-07-11 16:37:27.794927] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:50.999 [2024-07-11 16:37:27.794971] bdev_raid.c:2939:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:21:50.999 [2024-07-11 16:37:27.804785] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b840 00:21:50.999 [2024-07-11 16:37:27.806529] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:51.257 16:37:27 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:52.197 16:37:28 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.197 16:37:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:52.197 16:37:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:52.197 16:37:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:52.197 16:37:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:52.197 16:37:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.197 16:37:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.197 16:37:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:52.197 "name": "raid_bdev1", 00:21:52.197 "uuid": "f6d4dc94-4da7-42a0-8e60-55afd8329c08", 00:21:52.197 "strip_size_kb": 0, 00:21:52.197 "state": "online", 00:21:52.197 "raid_level": "raid1", 00:21:52.197 "superblock": false, 00:21:52.197 "num_base_bdevs": 4, 00:21:52.197 "num_base_bdevs_discovered": 4, 00:21:52.197 "num_base_bdevs_operational": 4, 00:21:52.197 "process": { 00:21:52.197 "type": "rebuild", 00:21:52.197 "target": "spare", 00:21:52.197 "progress": { 00:21:52.197 "blocks": 22528, 00:21:52.197 "percent": 34 00:21:52.197 } 00:21:52.197 }, 00:21:52.197 "base_bdevs_list": [ 00:21:52.197 { 00:21:52.197 "name": "spare", 00:21:52.197 "uuid": "8082c168-ce4c-5f5f-94c6-0ef05db98938", 00:21:52.197 "is_configured": true, 00:21:52.197 "data_offset": 0, 00:21:52.197 "data_size": 65536 00:21:52.197 }, 00:21:52.197 { 00:21:52.197 "name": "BaseBdev2", 00:21:52.197 "uuid": "67732f0d-c8c3-4fc3-8791-85452918f3de", 00:21:52.197 "is_configured": true, 00:21:52.197 "data_offset": 0, 00:21:52.197 "data_size": 65536 00:21:52.197 }, 00:21:52.197 { 00:21:52.197 "name": "BaseBdev3", 00:21:52.197 "uuid": "911ba344-5574-4885-b492-df29c5b969af", 00:21:52.197 "is_configured": true, 00:21:52.197 "data_offset": 0, 00:21:52.197 "data_size": 65536 00:21:52.197 }, 00:21:52.197 { 00:21:52.197 "name": "BaseBdev4", 00:21:52.197 "uuid": "cf62562b-4fbc-4dc7-b9e9-ebb1065e329a", 00:21:52.197 "is_configured": true, 00:21:52.197 "data_offset": 0, 00:21:52.197 "data_size": 65536 00:21:52.197 } 00:21:52.197 ] 00:21:52.197 }' 00:21:52.197 16:37:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:52.456 16:37:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.456 16:37:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:52.456 16:37:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.456 16:37:29 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:52.456 16:37:29 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:52.456 16:37:29 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:52.456 16:37:29 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:52.456 16:37:29 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:52.715 [2024-07-11 16:37:29.329001] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:52.715 [2024-07-11 16:37:29.414824] 
bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0b840 00:21:52.715 16:37:29 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:52.715 16:37:29 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:52.715 16:37:29 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.715 16:37:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:52.715 16:37:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:52.715 16:37:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:52.715 16:37:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:52.715 16:37:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.715 16:37:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.974 16:37:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:52.974 "name": "raid_bdev1", 00:21:52.974 "uuid": "f6d4dc94-4da7-42a0-8e60-55afd8329c08", 00:21:52.974 "strip_size_kb": 0, 00:21:52.974 "state": "online", 00:21:52.974 "raid_level": "raid1", 00:21:52.974 "superblock": false, 00:21:52.974 "num_base_bdevs": 4, 00:21:52.974 "num_base_bdevs_discovered": 3, 00:21:52.974 "num_base_bdevs_operational": 3, 00:21:52.974 "process": { 00:21:52.974 "type": "rebuild", 00:21:52.974 "target": "spare", 00:21:52.974 "progress": { 00:21:52.974 "blocks": 36864, 00:21:52.974 "percent": 56 00:21:52.974 } 00:21:52.974 }, 00:21:52.974 "base_bdevs_list": [ 00:21:52.974 { 00:21:52.974 "name": "spare", 00:21:52.974 "uuid": "8082c168-ce4c-5f5f-94c6-0ef05db98938", 00:21:52.974 "is_configured": true, 00:21:52.974 "data_offset": 0, 00:21:52.974 "data_size": 65536 00:21:52.974 }, 00:21:52.974 { 00:21:52.974 "name": null, 00:21:52.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.974 "is_configured": false, 00:21:52.974 "data_offset": 0, 00:21:52.974 "data_size": 65536 00:21:52.974 }, 00:21:52.974 { 00:21:52.974 "name": "BaseBdev3", 00:21:52.974 "uuid": "911ba344-5574-4885-b492-df29c5b969af", 00:21:52.974 "is_configured": true, 00:21:52.974 "data_offset": 0, 00:21:52.974 "data_size": 65536 00:21:52.974 }, 00:21:52.974 { 00:21:52.974 "name": "BaseBdev4", 00:21:52.974 "uuid": "cf62562b-4fbc-4dc7-b9e9-ebb1065e329a", 00:21:52.974 "is_configured": true, 00:21:52.974 "data_offset": 0, 00:21:52.974 "data_size": 65536 00:21:52.974 } 00:21:52.974 ] 00:21:52.974 }' 00:21:52.974 16:37:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:52.974 16:37:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.974 16:37:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:53.233 16:37:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.233 16:37:29 -- bdev/bdev_raid.sh@657 -- # local timeout=465 00:21:53.233 16:37:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:53.233 16:37:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.233 16:37:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:53.233 16:37:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:53.233 16:37:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:53.233 16:37:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:53.233 16:37:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.233 16:37:29 -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.234 16:37:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:53.234 "name": "raid_bdev1", 00:21:53.234 "uuid": "f6d4dc94-4da7-42a0-8e60-55afd8329c08", 00:21:53.234 "strip_size_kb": 0, 00:21:53.234 "state": "online", 00:21:53.234 "raid_level": "raid1", 00:21:53.234 "superblock": false, 00:21:53.234 "num_base_bdevs": 4, 00:21:53.234 "num_base_bdevs_discovered": 3, 00:21:53.234 "num_base_bdevs_operational": 3, 00:21:53.234 "process": { 00:21:53.234 "type": "rebuild", 00:21:53.234 "target": "spare", 00:21:53.234 "progress": { 00:21:53.234 "blocks": 43008, 00:21:53.234 "percent": 65 00:21:53.234 } 00:21:53.234 }, 00:21:53.234 "base_bdevs_list": [ 00:21:53.234 { 00:21:53.234 "name": "spare", 00:21:53.234 "uuid": "8082c168-ce4c-5f5f-94c6-0ef05db98938", 00:21:53.234 "is_configured": true, 00:21:53.234 "data_offset": 0, 00:21:53.234 "data_size": 65536 00:21:53.234 }, 00:21:53.234 { 00:21:53.234 "name": null, 00:21:53.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.234 "is_configured": false, 00:21:53.234 "data_offset": 0, 00:21:53.234 "data_size": 65536 00:21:53.234 }, 00:21:53.234 { 00:21:53.234 "name": "BaseBdev3", 00:21:53.234 "uuid": "911ba344-5574-4885-b492-df29c5b969af", 00:21:53.234 "is_configured": true, 00:21:53.234 "data_offset": 0, 00:21:53.234 "data_size": 65536 00:21:53.234 }, 00:21:53.234 { 00:21:53.234 "name": "BaseBdev4", 00:21:53.234 "uuid": "cf62562b-4fbc-4dc7-b9e9-ebb1065e329a", 00:21:53.234 "is_configured": true, 00:21:53.234 "data_offset": 0, 00:21:53.234 "data_size": 65536 00:21:53.234 } 00:21:53.234 ] 00:21:53.234 }' 00:21:53.234 16:37:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:53.492 16:37:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.492 16:37:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:53.492 16:37:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.492 16:37:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:54.429 [2024-07-11 16:37:31.023065] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:54.429 [2024-07-11 16:37:31.023133] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:54.429 [2024-07-11 16:37:31.023215] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.429 16:37:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:54.429 16:37:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.429 16:37:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:54.429 16:37:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:54.429 16:37:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:54.429 16:37:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:54.429 16:37:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.429 16:37:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.688 16:37:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:54.688 "name": "raid_bdev1", 00:21:54.688 "uuid": "f6d4dc94-4da7-42a0-8e60-55afd8329c08", 00:21:54.688 "strip_size_kb": 0, 00:21:54.688 "state": "online", 00:21:54.688 "raid_level": "raid1", 00:21:54.688 "superblock": false, 00:21:54.688 "num_base_bdevs": 4, 00:21:54.688 "num_base_bdevs_discovered": 3, 00:21:54.688 
"num_base_bdevs_operational": 3, 00:21:54.688 "base_bdevs_list": [ 00:21:54.688 { 00:21:54.688 "name": "spare", 00:21:54.688 "uuid": "8082c168-ce4c-5f5f-94c6-0ef05db98938", 00:21:54.688 "is_configured": true, 00:21:54.688 "data_offset": 0, 00:21:54.688 "data_size": 65536 00:21:54.688 }, 00:21:54.688 { 00:21:54.688 "name": null, 00:21:54.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.688 "is_configured": false, 00:21:54.688 "data_offset": 0, 00:21:54.688 "data_size": 65536 00:21:54.688 }, 00:21:54.688 { 00:21:54.688 "name": "BaseBdev3", 00:21:54.688 "uuid": "911ba344-5574-4885-b492-df29c5b969af", 00:21:54.688 "is_configured": true, 00:21:54.688 "data_offset": 0, 00:21:54.688 "data_size": 65536 00:21:54.688 }, 00:21:54.688 { 00:21:54.688 "name": "BaseBdev4", 00:21:54.688 "uuid": "cf62562b-4fbc-4dc7-b9e9-ebb1065e329a", 00:21:54.688 "is_configured": true, 00:21:54.688 "data_offset": 0, 00:21:54.688 "data_size": 65536 00:21:54.688 } 00:21:54.688 ] 00:21:54.688 }' 00:21:54.688 16:37:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:54.688 16:37:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:54.688 16:37:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:54.688 16:37:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:54.688 16:37:31 -- bdev/bdev_raid.sh@660 -- # break 00:21:54.688 16:37:31 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:54.688 16:37:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:54.688 16:37:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:54.688 16:37:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:54.688 16:37:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:54.688 16:37:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.688 16:37:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.947 16:37:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:54.947 "name": "raid_bdev1", 00:21:54.947 "uuid": "f6d4dc94-4da7-42a0-8e60-55afd8329c08", 00:21:54.947 "strip_size_kb": 0, 00:21:54.947 "state": "online", 00:21:54.947 "raid_level": "raid1", 00:21:54.947 "superblock": false, 00:21:54.947 "num_base_bdevs": 4, 00:21:54.947 "num_base_bdevs_discovered": 3, 00:21:54.947 "num_base_bdevs_operational": 3, 00:21:54.947 "base_bdevs_list": [ 00:21:54.947 { 00:21:54.947 "name": "spare", 00:21:54.947 "uuid": "8082c168-ce4c-5f5f-94c6-0ef05db98938", 00:21:54.947 "is_configured": true, 00:21:54.947 "data_offset": 0, 00:21:54.947 "data_size": 65536 00:21:54.947 }, 00:21:54.947 { 00:21:54.947 "name": null, 00:21:54.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.947 "is_configured": false, 00:21:54.947 "data_offset": 0, 00:21:54.947 "data_size": 65536 00:21:54.947 }, 00:21:54.947 { 00:21:54.947 "name": "BaseBdev3", 00:21:54.947 "uuid": "911ba344-5574-4885-b492-df29c5b969af", 00:21:54.947 "is_configured": true, 00:21:54.947 "data_offset": 0, 00:21:54.947 "data_size": 65536 00:21:54.947 }, 00:21:54.947 { 00:21:54.947 "name": "BaseBdev4", 00:21:54.947 "uuid": "cf62562b-4fbc-4dc7-b9e9-ebb1065e329a", 00:21:54.947 "is_configured": true, 00:21:54.947 "data_offset": 0, 00:21:54.947 "data_size": 65536 00:21:54.947 } 00:21:54.947 ] 00:21:54.947 }' 00:21:54.947 16:37:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:54.947 16:37:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e 
]] 00:21:54.947 16:37:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:55.206 16:37:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:55.206 16:37:31 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:55.206 16:37:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:55.206 16:37:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:55.206 16:37:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:55.206 16:37:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:55.206 16:37:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:55.206 16:37:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:55.206 16:37:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:55.206 16:37:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:55.206 16:37:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:55.206 16:37:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.206 16:37:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.465 16:37:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:55.465 "name": "raid_bdev1", 00:21:55.465 "uuid": "f6d4dc94-4da7-42a0-8e60-55afd8329c08", 00:21:55.465 "strip_size_kb": 0, 00:21:55.465 "state": "online", 00:21:55.465 "raid_level": "raid1", 00:21:55.465 "superblock": false, 00:21:55.465 "num_base_bdevs": 4, 00:21:55.465 "num_base_bdevs_discovered": 3, 00:21:55.465 "num_base_bdevs_operational": 3, 00:21:55.465 "base_bdevs_list": [ 00:21:55.465 { 00:21:55.465 "name": "spare", 00:21:55.465 "uuid": "8082c168-ce4c-5f5f-94c6-0ef05db98938", 00:21:55.465 "is_configured": true, 00:21:55.465 "data_offset": 0, 00:21:55.465 "data_size": 65536 00:21:55.465 }, 00:21:55.465 { 00:21:55.465 "name": null, 00:21:55.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.465 "is_configured": false, 00:21:55.465 "data_offset": 0, 00:21:55.465 "data_size": 65536 00:21:55.465 }, 00:21:55.465 { 00:21:55.465 "name": "BaseBdev3", 00:21:55.465 "uuid": "911ba344-5574-4885-b492-df29c5b969af", 00:21:55.465 "is_configured": true, 00:21:55.465 "data_offset": 0, 00:21:55.465 "data_size": 65536 00:21:55.465 }, 00:21:55.465 { 00:21:55.465 "name": "BaseBdev4", 00:21:55.465 "uuid": "cf62562b-4fbc-4dc7-b9e9-ebb1065e329a", 00:21:55.465 "is_configured": true, 00:21:55.465 "data_offset": 0, 00:21:55.465 "data_size": 65536 00:21:55.465 } 00:21:55.465 ] 00:21:55.465 }' 00:21:55.465 16:37:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:55.465 16:37:32 -- common/autotest_common.sh@10 -- # set +x 00:21:56.058 16:37:32 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:56.320 [2024-07-11 16:37:32.881985] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:56.320 [2024-07-11 16:37:32.882020] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:56.320 [2024-07-11 16:37:32.882104] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:56.320 [2024-07-11 16:37:32.882170] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:56.320 [2024-07-11 16:37:32.882180] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:21:56.320 
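For readers tracing the teardown above: bdev_raid_delete drives the array from online to offline (raid_bdev_deconfigure), unregisters the io_device once its channels drain, and frees it in raid_bdev_cleanup; the jq length check that follows is the leak assert, expecting bdev_raid_get_bdevs all to report zero arrays left. A condensed sketch of that delete-and-assert step as it appears in the trace (illustrative, not verbatim from bdev_raid.sh):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
$rpc -s $sock bdev_raid_delete raid_bdev1
# the RPC returns once the bdev is unregistered; prove nothing was left behind
remaining=$($rpc -s $sock bdev_raid_get_bdevs all | jq length)
[[ $remaining == 0 ]]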
16:37:32 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:56.320 16:37:32 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.578 16:37:33 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:56.578 16:37:33 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:56.578 16:37:33 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:56.578 16:37:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:56.578 16:37:33 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:56.578 16:37:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:56.578 16:37:33 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:56.578 16:37:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:56.578 16:37:33 -- bdev/nbd_common.sh@12 -- # local i 00:21:56.578 16:37:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:56.578 16:37:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.578 16:37:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:56.578 /dev/nbd0 00:21:56.578 16:37:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:56.856 16:37:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:56.856 16:37:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:21:56.856 16:37:33 -- common/autotest_common.sh@857 -- # local i 00:21:56.856 16:37:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:56.856 16:37:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:56.856 16:37:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:21:56.856 16:37:33 -- common/autotest_common.sh@861 -- # break 00:21:56.856 16:37:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:56.856 16:37:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:56.856 16:37:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.856 1+0 records in 00:21:56.856 1+0 records out 00:21:56.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260008 s, 15.8 MB/s 00:21:56.856 16:37:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.856 16:37:33 -- common/autotest_common.sh@874 -- # size=4096 00:21:56.856 16:37:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.856 16:37:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:56.856 16:37:33 -- common/autotest_common.sh@877 -- # return 0 00:21:56.856 16:37:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.856 16:37:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.856 16:37:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:56.856 /dev/nbd1 00:21:56.856 16:37:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:56.856 16:37:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:56.856 16:37:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:21:56.856 16:37:33 -- common/autotest_common.sh@857 -- # local i 00:21:56.856 16:37:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:21:56.856 16:37:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:21:56.856 16:37:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:21:56.856 16:37:33 -- common/autotest_common.sh@861 
-- # break 00:21:56.856 16:37:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:21:56.856 16:37:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:21:56.856 16:37:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.856 1+0 records in 00:21:56.856 1+0 records out 00:21:56.856 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609933 s, 6.7 MB/s 00:21:56.856 16:37:33 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.856 16:37:33 -- common/autotest_common.sh@874 -- # size=4096 00:21:56.856 16:37:33 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.856 16:37:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:21:56.856 16:37:33 -- common/autotest_common.sh@877 -- # return 0 00:21:56.856 16:37:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.856 16:37:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.856 16:37:33 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:57.154 16:37:33 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:57.154 16:37:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:57.154 16:37:33 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:57.154 16:37:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:57.154 16:37:33 -- bdev/nbd_common.sh@51 -- # local i 00:21:57.154 16:37:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:57.154 16:37:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@41 -- # break 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@45 -- # return 0 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:57.412 16:37:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:57.670 16:37:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:57.670 16:37:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:57.670 16:37:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:57.670 16:37:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:57.670 16:37:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:57.670 16:37:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:57.670 16:37:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:57.670 16:37:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:57.670 16:37:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:57.671 16:37:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:57.929 16:37:34 -- bdev/nbd_common.sh@41 -- # break 00:21:57.929 
16:37:34 -- bdev/nbd_common.sh@45 -- # return 0 00:21:57.929 16:37:34 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:57.929 16:37:34 -- bdev/bdev_raid.sh@709 -- # killprocess 127817 00:21:57.929 16:37:34 -- common/autotest_common.sh@926 -- # '[' -z 127817 ']' 00:21:57.929 16:37:34 -- common/autotest_common.sh@930 -- # kill -0 127817 00:21:57.929 16:37:34 -- common/autotest_common.sh@931 -- # uname 00:21:57.929 16:37:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:57.929 16:37:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 127817 00:21:57.929 16:37:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:57.929 16:37:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:57.929 16:37:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 127817' 00:21:57.929 killing process with pid 127817 00:21:57.929 16:37:34 -- common/autotest_common.sh@945 -- # kill 127817 00:21:57.929 Received shutdown signal, test time was about 60.000000 seconds 00:21:57.929 00:21:57.929 Latency(us) 00:21:57.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.929 =================================================================================================================== 00:21:57.929 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:57.929 16:37:34 -- common/autotest_common.sh@950 -- # wait 127817 00:21:57.929 [2024-07-11 16:37:34.503009] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:58.186 [2024-07-11 16:37:34.814852] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:59.120 ************************************ 00:21:59.120 END TEST raid_rebuild_test 00:21:59.120 ************************************ 00:21:59.120 16:37:35 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:59.120 00:21:59.120 real 0m21.717s 00:21:59.120 user 0m30.412s 00:21:59.120 sys 0m3.368s 00:21:59.120 16:37:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:59.120 16:37:35 -- common/autotest_common.sh@10 -- # set +x 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:21:59.121 16:37:35 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:21:59.121 16:37:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:59.121 16:37:35 -- common/autotest_common.sh@10 -- # set +x 00:21:59.121 ************************************ 00:21:59.121 START TEST raid_rebuild_test_sb 00:21:59.121 ************************************ 00:21:59.121 16:37:35 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:59.121 16:37:35 -- 
bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@544 -- # raid_pid=128396 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@545 -- # waitforlisten 128396 /var/tmp/spdk-raid.sock 00:21:59.121 16:37:35 -- common/autotest_common.sh@819 -- # '[' -z 128396 ']' 00:21:59.121 16:37:35 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:59.121 16:37:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:59.121 16:37:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:59.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:59.121 16:37:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:59.121 16:37:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:59.121 16:37:35 -- common/autotest_common.sh@10 -- # set +x 00:21:59.121 [2024-07-11 16:37:35.855190] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:59.121 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:59.121 Zero copy mechanism will not be used. 
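The launch sequence above is the standard fixture for these rebuild tests: bdevperf is started in the background against a private RPC socket with -z, so no I/O runs until the test triggers it over RPC, and waitforlisten blocks until the application answers on that socket; only then is the bdev stack built over rpc.py. A minimal sketch of that launch-and-poll pattern, assuming the repo paths seen in this log (the loop is a condensed stand-in for waitforlisten in autotest_common.sh, which also re-checks that the PID stays alive):

sock=/var/tmp/spdk-raid.sock
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r $sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
i=0
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock rpc_get_methods &> /dev/null; do
    ((++i > 100)) && exit 1      # give up after roughly ten seconds
    kill -0 $raid_pid || exit 1  # bail out if bdevperf died during startup
    sleep 0.1
done
# from here every bdev is created over the same socket, e.g.:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock bdev_malloc_create 32 512 -b BaseBdev1_malloc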
00:21:59.121 [2024-07-11 16:37:35.855353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128396 ] 00:21:59.379 [2024-07-11 16:37:36.018599] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.379 [2024-07-11 16:37:36.176366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.637 [2024-07-11 16:37:36.338113] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:00.203 16:37:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:00.203 16:37:36 -- common/autotest_common.sh@852 -- # return 0 00:22:00.203 16:37:36 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:00.203 16:37:36 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:00.203 16:37:36 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:00.203 BaseBdev1_malloc 00:22:00.462 16:37:37 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:00.462 [2024-07-11 16:37:37.182969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:00.462 [2024-07-11 16:37:37.183063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.462 [2024-07-11 16:37:37.183093] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:00.462 [2024-07-11 16:37:37.183131] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.462 [2024-07-11 16:37:37.185200] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.462 [2024-07-11 16:37:37.185279] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:00.462 BaseBdev1 00:22:00.462 16:37:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:00.462 16:37:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:00.462 16:37:37 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:00.720 BaseBdev2_malloc 00:22:00.720 16:37:37 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:00.978 [2024-07-11 16:37:37.585928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:00.978 [2024-07-11 16:37:37.586014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.978 [2024-07-11 16:37:37.586054] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:00.978 [2024-07-11 16:37:37.586103] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.978 [2024-07-11 16:37:37.588025] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.978 [2024-07-11 16:37:37.588087] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:00.978 BaseBdev2 00:22:00.978 16:37:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:00.978 16:37:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:00.978 16:37:37 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:01.237 BaseBdev3_malloc 00:22:01.237 16:37:37 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:01.237 [2024-07-11 16:37:38.030428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:01.237 [2024-07-11 16:37:38.030516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.237 [2024-07-11 16:37:38.030555] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:01.237 [2024-07-11 16:37:38.030625] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.237 [2024-07-11 16:37:38.032684] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.237 [2024-07-11 16:37:38.032753] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:01.237 BaseBdev3 00:22:01.237 16:37:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:01.237 16:37:38 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:01.237 16:37:38 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:01.496 BaseBdev4_malloc 00:22:01.754 16:37:38 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:01.754 [2024-07-11 16:37:38.483111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:01.754 [2024-07-11 16:37:38.483207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.754 [2024-07-11 16:37:38.483239] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:01.754 [2024-07-11 16:37:38.483277] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.754 [2024-07-11 16:37:38.485380] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.754 [2024-07-11 16:37:38.485447] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:01.754 BaseBdev4 00:22:01.754 16:37:38 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:02.012 spare_malloc 00:22:02.012 16:37:38 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:02.354 spare_delay 00:22:02.354 16:37:38 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:02.354 [2024-07-11 16:37:39.063615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:02.354 [2024-07-11 16:37:39.063719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.354 [2024-07-11 16:37:39.063750] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:02.354 [2024-07-11 16:37:39.063789] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.354 [2024-07-11 16:37:39.065904] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:22:02.354 [2024-07-11 16:37:39.065979] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:02.354 spare 00:22:02.354 16:37:39 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:02.612 [2024-07-11 16:37:39.295725] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:02.612 [2024-07-11 16:37:39.297363] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.612 [2024-07-11 16:37:39.297448] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:02.612 [2024-07-11 16:37:39.297503] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:02.612 [2024-07-11 16:37:39.297750] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:22:02.612 [2024-07-11 16:37:39.297775] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:02.612 [2024-07-11 16:37:39.297883] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:02.612 [2024-07-11 16:37:39.298212] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:22:02.612 [2024-07-11 16:37:39.298237] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:22:02.612 [2024-07-11 16:37:39.298400] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.612 16:37:39 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:02.612 16:37:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:02.612 16:37:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:02.612 16:37:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:02.612 16:37:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:02.612 16:37:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:02.612 16:37:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:02.612 16:37:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:02.612 16:37:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:02.612 16:37:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:02.612 16:37:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.612 16:37:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.869 16:37:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:02.870 "name": "raid_bdev1", 00:22:02.870 "uuid": "d62e090a-627f-4791-a74c-786bdb6842a5", 00:22:02.870 "strip_size_kb": 0, 00:22:02.870 "state": "online", 00:22:02.870 "raid_level": "raid1", 00:22:02.870 "superblock": true, 00:22:02.870 "num_base_bdevs": 4, 00:22:02.870 "num_base_bdevs_discovered": 4, 00:22:02.870 "num_base_bdevs_operational": 4, 00:22:02.870 "base_bdevs_list": [ 00:22:02.870 { 00:22:02.870 "name": "BaseBdev1", 00:22:02.870 "uuid": "c6701266-6dcd-5336-8290-1d9b6ff06833", 00:22:02.870 "is_configured": true, 00:22:02.870 "data_offset": 2048, 00:22:02.870 "data_size": 63488 00:22:02.870 }, 00:22:02.870 { 00:22:02.870 "name": "BaseBdev2", 00:22:02.870 "uuid": "23699d99-7a67-5da3-b650-c161317fa0c3", 00:22:02.870 "is_configured": true, 00:22:02.870 "data_offset": 2048, 
00:22:02.870 "data_size": 63488 00:22:02.870 }, 00:22:02.870 { 00:22:02.870 "name": "BaseBdev3", 00:22:02.870 "uuid": "ab5a0efb-1405-5481-9302-e47a2257e1ba", 00:22:02.870 "is_configured": true, 00:22:02.870 "data_offset": 2048, 00:22:02.870 "data_size": 63488 00:22:02.870 }, 00:22:02.870 { 00:22:02.870 "name": "BaseBdev4", 00:22:02.870 "uuid": "c0f34b33-dddd-55a0-98b5-3c5aee972128", 00:22:02.870 "is_configured": true, 00:22:02.870 "data_offset": 2048, 00:22:02.870 "data_size": 63488 00:22:02.870 } 00:22:02.870 ] 00:22:02.870 }' 00:22:02.870 16:37:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:02.870 16:37:39 -- common/autotest_common.sh@10 -- # set +x 00:22:03.435 16:37:40 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:03.435 16:37:40 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:03.693 [2024-07-11 16:37:40.388049] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:03.693 16:37:40 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:03.693 16:37:40 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.693 16:37:40 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:03.951 16:37:40 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:03.951 16:37:40 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:03.951 16:37:40 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:03.951 16:37:40 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:03.951 16:37:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:03.951 16:37:40 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:03.951 16:37:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:03.951 16:37:40 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:03.951 16:37:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:03.951 16:37:40 -- bdev/nbd_common.sh@12 -- # local i 00:22:03.951 16:37:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:03.951 16:37:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:03.951 16:37:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:04.209 [2024-07-11 16:37:40.855983] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:04.209 /dev/nbd0 00:22:04.209 16:37:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:04.209 16:37:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:04.209 16:37:40 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:04.209 16:37:40 -- common/autotest_common.sh@857 -- # local i 00:22:04.209 16:37:40 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:04.209 16:37:40 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:04.209 16:37:40 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:04.210 16:37:40 -- common/autotest_common.sh@861 -- # break 00:22:04.210 16:37:40 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:04.210 16:37:40 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:04.210 16:37:40 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:04.210 1+0 records in 00:22:04.210 1+0 records out 00:22:04.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322569 s, 12.7 MB/s 00:22:04.210 
16:37:40 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.210 16:37:40 -- common/autotest_common.sh@874 -- # size=4096 00:22:04.210 16:37:40 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:04.210 16:37:40 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:04.210 16:37:40 -- common/autotest_common.sh@877 -- # return 0 00:22:04.210 16:37:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:04.210 16:37:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:04.210 16:37:40 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:22:04.210 16:37:40 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:22:04.210 16:37:40 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:22:10.770 63488+0 records in 00:22:10.770 63488+0 records out 00:22:10.770 32505856 bytes (33 MB, 31 MiB) copied, 5.77528 s, 5.6 MB/s 00:22:10.770 16:37:46 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:10.770 16:37:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:10.770 16:37:46 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:10.770 16:37:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:10.770 16:37:46 -- bdev/nbd_common.sh@51 -- # local i 00:22:10.770 16:37:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:10.770 16:37:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:10.770 16:37:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:10.770 16:37:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:10.770 16:37:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:10.770 16:37:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:10.770 16:37:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:10.770 16:37:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:10.770 16:37:46 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:10.770 [2024-07-11 16:37:46.938625] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:10.770 16:37:47 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:10.770 16:37:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:10.770 16:37:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:10.770 16:37:47 -- bdev/nbd_common.sh@41 -- # break 00:22:10.770 16:37:47 -- bdev/nbd_common.sh@45 -- # return 0 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:10.770 [2024-07-11 16:37:47.262411] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@125 
-- # local tmp 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:10.770 "name": "raid_bdev1", 00:22:10.770 "uuid": "d62e090a-627f-4791-a74c-786bdb6842a5", 00:22:10.770 "strip_size_kb": 0, 00:22:10.770 "state": "online", 00:22:10.770 "raid_level": "raid1", 00:22:10.770 "superblock": true, 00:22:10.770 "num_base_bdevs": 4, 00:22:10.770 "num_base_bdevs_discovered": 3, 00:22:10.770 "num_base_bdevs_operational": 3, 00:22:10.770 "base_bdevs_list": [ 00:22:10.770 { 00:22:10.770 "name": null, 00:22:10.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.770 "is_configured": false, 00:22:10.770 "data_offset": 2048, 00:22:10.770 "data_size": 63488 00:22:10.770 }, 00:22:10.770 { 00:22:10.770 "name": "BaseBdev2", 00:22:10.770 "uuid": "23699d99-7a67-5da3-b650-c161317fa0c3", 00:22:10.770 "is_configured": true, 00:22:10.770 "data_offset": 2048, 00:22:10.770 "data_size": 63488 00:22:10.770 }, 00:22:10.770 { 00:22:10.770 "name": "BaseBdev3", 00:22:10.770 "uuid": "ab5a0efb-1405-5481-9302-e47a2257e1ba", 00:22:10.770 "is_configured": true, 00:22:10.770 "data_offset": 2048, 00:22:10.770 "data_size": 63488 00:22:10.770 }, 00:22:10.770 { 00:22:10.770 "name": "BaseBdev4", 00:22:10.770 "uuid": "c0f34b33-dddd-55a0-98b5-3c5aee972128", 00:22:10.770 "is_configured": true, 00:22:10.770 "data_offset": 2048, 00:22:10.770 "data_size": 63488 00:22:10.770 } 00:22:10.770 ] 00:22:10.770 }' 00:22:10.770 16:37:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:10.770 16:37:47 -- common/autotest_common.sh@10 -- # set +x 00:22:11.337 16:37:48 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:11.596 [2024-07-11 16:37:48.250574] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:11.596 [2024-07-11 16:37:48.250626] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:11.596 [2024-07-11 16:37:48.260647] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4610 00:22:11.596 [2024-07-11 16:37:48.262445] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:11.596 16:37:48 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:12.531 16:37:49 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.531 16:37:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:12.531 16:37:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:12.531 16:37:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:12.531 16:37:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:12.531 16:37:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.531 16:37:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.790 16:37:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:12.790 "name": "raid_bdev1", 00:22:12.790 "uuid": "d62e090a-627f-4791-a74c-786bdb6842a5", 00:22:12.790 "strip_size_kb": 0, 00:22:12.790 "state": "online", 00:22:12.790 "raid_level": "raid1", 00:22:12.790 "superblock": true, 00:22:12.790 "num_base_bdevs": 4, 00:22:12.790 "num_base_bdevs_discovered": 4, 
00:22:12.790 "num_base_bdevs_operational": 4, 00:22:12.790 "process": { 00:22:12.790 "type": "rebuild", 00:22:12.790 "target": "spare", 00:22:12.790 "progress": { 00:22:12.790 "blocks": 24576, 00:22:12.790 "percent": 38 00:22:12.790 } 00:22:12.790 }, 00:22:12.790 "base_bdevs_list": [ 00:22:12.790 { 00:22:12.790 "name": "spare", 00:22:12.790 "uuid": "c90e3dc4-1900-5923-b459-a3d31acf29ef", 00:22:12.790 "is_configured": true, 00:22:12.790 "data_offset": 2048, 00:22:12.790 "data_size": 63488 00:22:12.790 }, 00:22:12.790 { 00:22:12.790 "name": "BaseBdev2", 00:22:12.790 "uuid": "23699d99-7a67-5da3-b650-c161317fa0c3", 00:22:12.790 "is_configured": true, 00:22:12.790 "data_offset": 2048, 00:22:12.790 "data_size": 63488 00:22:12.790 }, 00:22:12.790 { 00:22:12.790 "name": "BaseBdev3", 00:22:12.790 "uuid": "ab5a0efb-1405-5481-9302-e47a2257e1ba", 00:22:12.790 "is_configured": true, 00:22:12.790 "data_offset": 2048, 00:22:12.790 "data_size": 63488 00:22:12.790 }, 00:22:12.790 { 00:22:12.790 "name": "BaseBdev4", 00:22:12.790 "uuid": "c0f34b33-dddd-55a0-98b5-3c5aee972128", 00:22:12.790 "is_configured": true, 00:22:12.790 "data_offset": 2048, 00:22:12.790 "data_size": 63488 00:22:12.790 } 00:22:12.790 ] 00:22:12.790 }' 00:22:12.790 16:37:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:12.790 16:37:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.790 16:37:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:13.049 16:37:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:13.050 16:37:49 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:13.050 [2024-07-11 16:37:49.765070] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:13.050 [2024-07-11 16:37:49.770551] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:13.050 [2024-07-11 16:37:49.770638] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.050 16:37:49 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:13.050 16:37:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:13.050 16:37:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:13.050 16:37:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:13.050 16:37:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:13.050 16:37:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:13.050 16:37:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:13.050 16:37:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:13.050 16:37:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:13.050 16:37:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:13.050 16:37:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.050 16:37:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.309 16:37:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:13.309 "name": "raid_bdev1", 00:22:13.309 "uuid": "d62e090a-627f-4791-a74c-786bdb6842a5", 00:22:13.309 "strip_size_kb": 0, 00:22:13.309 "state": "online", 00:22:13.309 "raid_level": "raid1", 00:22:13.309 "superblock": true, 00:22:13.309 "num_base_bdevs": 4, 00:22:13.309 "num_base_bdevs_discovered": 3, 00:22:13.309 "num_base_bdevs_operational": 3, 
00:22:13.309 "base_bdevs_list": [ 00:22:13.309 { 00:22:13.309 "name": null, 00:22:13.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.309 "is_configured": false, 00:22:13.309 "data_offset": 2048, 00:22:13.309 "data_size": 63488 00:22:13.309 }, 00:22:13.309 { 00:22:13.309 "name": "BaseBdev2", 00:22:13.309 "uuid": "23699d99-7a67-5da3-b650-c161317fa0c3", 00:22:13.309 "is_configured": true, 00:22:13.309 "data_offset": 2048, 00:22:13.309 "data_size": 63488 00:22:13.309 }, 00:22:13.309 { 00:22:13.309 "name": "BaseBdev3", 00:22:13.309 "uuid": "ab5a0efb-1405-5481-9302-e47a2257e1ba", 00:22:13.309 "is_configured": true, 00:22:13.309 "data_offset": 2048, 00:22:13.309 "data_size": 63488 00:22:13.309 }, 00:22:13.309 { 00:22:13.309 "name": "BaseBdev4", 00:22:13.309 "uuid": "c0f34b33-dddd-55a0-98b5-3c5aee972128", 00:22:13.309 "is_configured": true, 00:22:13.309 "data_offset": 2048, 00:22:13.309 "data_size": 63488 00:22:13.309 } 00:22:13.309 ] 00:22:13.309 }' 00:22:13.309 16:37:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:13.309 16:37:50 -- common/autotest_common.sh@10 -- # set +x 00:22:14.245 16:37:50 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:14.245 16:37:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:14.245 16:37:50 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:14.245 16:37:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:14.245 16:37:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:14.245 16:37:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.245 16:37:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.245 16:37:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:14.245 "name": "raid_bdev1", 00:22:14.245 "uuid": "d62e090a-627f-4791-a74c-786bdb6842a5", 00:22:14.245 "strip_size_kb": 0, 00:22:14.245 "state": "online", 00:22:14.245 "raid_level": "raid1", 00:22:14.245 "superblock": true, 00:22:14.245 "num_base_bdevs": 4, 00:22:14.245 "num_base_bdevs_discovered": 3, 00:22:14.245 "num_base_bdevs_operational": 3, 00:22:14.245 "base_bdevs_list": [ 00:22:14.245 { 00:22:14.245 "name": null, 00:22:14.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.245 "is_configured": false, 00:22:14.245 "data_offset": 2048, 00:22:14.245 "data_size": 63488 00:22:14.245 }, 00:22:14.245 { 00:22:14.245 "name": "BaseBdev2", 00:22:14.245 "uuid": "23699d99-7a67-5da3-b650-c161317fa0c3", 00:22:14.245 "is_configured": true, 00:22:14.245 "data_offset": 2048, 00:22:14.245 "data_size": 63488 00:22:14.245 }, 00:22:14.245 { 00:22:14.245 "name": "BaseBdev3", 00:22:14.245 "uuid": "ab5a0efb-1405-5481-9302-e47a2257e1ba", 00:22:14.245 "is_configured": true, 00:22:14.245 "data_offset": 2048, 00:22:14.245 "data_size": 63488 00:22:14.245 }, 00:22:14.245 { 00:22:14.245 "name": "BaseBdev4", 00:22:14.245 "uuid": "c0f34b33-dddd-55a0-98b5-3c5aee972128", 00:22:14.245 "is_configured": true, 00:22:14.245 "data_offset": 2048, 00:22:14.245 "data_size": 63488 00:22:14.245 } 00:22:14.245 ] 00:22:14.245 }' 00:22:14.245 16:37:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:14.245 16:37:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:14.245 16:37:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:14.245 16:37:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:14.245 16:37:51 -- bdev/bdev_raid.sh@613 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:14.503 [2024-07-11 16:37:51.237747] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:14.503 [2024-07-11 16:37:51.237790] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:14.503 [2024-07-11 16:37:51.247538] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca47b0 00:22:14.503 [2024-07-11 16:37:51.249309] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:14.503 16:37:51 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:15.880 "name": "raid_bdev1", 00:22:15.880 "uuid": "d62e090a-627f-4791-a74c-786bdb6842a5", 00:22:15.880 "strip_size_kb": 0, 00:22:15.880 "state": "online", 00:22:15.880 "raid_level": "raid1", 00:22:15.880 "superblock": true, 00:22:15.880 "num_base_bdevs": 4, 00:22:15.880 "num_base_bdevs_discovered": 4, 00:22:15.880 "num_base_bdevs_operational": 4, 00:22:15.880 "process": { 00:22:15.880 "type": "rebuild", 00:22:15.880 "target": "spare", 00:22:15.880 "progress": { 00:22:15.880 "blocks": 24576, 00:22:15.880 "percent": 38 00:22:15.880 } 00:22:15.880 }, 00:22:15.880 "base_bdevs_list": [ 00:22:15.880 { 00:22:15.880 "name": "spare", 00:22:15.880 "uuid": "c90e3dc4-1900-5923-b459-a3d31acf29ef", 00:22:15.880 "is_configured": true, 00:22:15.880 "data_offset": 2048, 00:22:15.880 "data_size": 63488 00:22:15.880 }, 00:22:15.880 { 00:22:15.880 "name": "BaseBdev2", 00:22:15.880 "uuid": "23699d99-7a67-5da3-b650-c161317fa0c3", 00:22:15.880 "is_configured": true, 00:22:15.880 "data_offset": 2048, 00:22:15.880 "data_size": 63488 00:22:15.880 }, 00:22:15.880 { 00:22:15.880 "name": "BaseBdev3", 00:22:15.880 "uuid": "ab5a0efb-1405-5481-9302-e47a2257e1ba", 00:22:15.880 "is_configured": true, 00:22:15.880 "data_offset": 2048, 00:22:15.880 "data_size": 63488 00:22:15.880 }, 00:22:15.880 { 00:22:15.880 "name": "BaseBdev4", 00:22:15.880 "uuid": "c0f34b33-dddd-55a0-98b5-3c5aee972128", 00:22:15.880 "is_configured": true, 00:22:15.880 "data_offset": 2048, 00:22:15.880 "data_size": 63488 00:22:15.880 } 00:22:15.880 ] 00:22:15.880 }' 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:15.880 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:15.880 16:37:52 -- 
bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:15.880 16:37:52 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:16.139 [2024-07-11 16:37:52.859746] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:16.397 [2024-07-11 16:37:52.958287] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca47b0 00:22:16.397 16:37:53 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:16.397 16:37:53 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:16.397 16:37:53 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:16.397 16:37:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:16.397 16:37:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:16.397 16:37:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:16.397 16:37:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:16.397 16:37:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.397 16:37:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:16.655 "name": "raid_bdev1", 00:22:16.655 "uuid": "d62e090a-627f-4791-a74c-786bdb6842a5", 00:22:16.655 "strip_size_kb": 0, 00:22:16.655 "state": "online", 00:22:16.655 "raid_level": "raid1", 00:22:16.655 "superblock": true, 00:22:16.655 "num_base_bdevs": 4, 00:22:16.655 "num_base_bdevs_discovered": 3, 00:22:16.655 "num_base_bdevs_operational": 3, 00:22:16.655 "process": { 00:22:16.655 "type": "rebuild", 00:22:16.655 "target": "spare", 00:22:16.655 "progress": { 00:22:16.655 "blocks": 40960, 00:22:16.655 "percent": 64 00:22:16.655 } 00:22:16.655 }, 00:22:16.655 "base_bdevs_list": [ 00:22:16.655 { 00:22:16.655 "name": "spare", 00:22:16.655 "uuid": "c90e3dc4-1900-5923-b459-a3d31acf29ef", 00:22:16.655 "is_configured": true, 00:22:16.655 "data_offset": 2048, 00:22:16.655 "data_size": 63488 00:22:16.655 }, 00:22:16.655 { 00:22:16.655 "name": null, 00:22:16.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.655 "is_configured": false, 00:22:16.655 "data_offset": 2048, 00:22:16.655 "data_size": 63488 00:22:16.655 }, 00:22:16.655 { 00:22:16.655 "name": "BaseBdev3", 00:22:16.655 "uuid": "ab5a0efb-1405-5481-9302-e47a2257e1ba", 00:22:16.655 "is_configured": true, 00:22:16.655 "data_offset": 2048, 00:22:16.655 "data_size": 63488 00:22:16.655 }, 00:22:16.655 { 00:22:16.655 "name": "BaseBdev4", 00:22:16.655 "uuid": "c0f34b33-dddd-55a0-98b5-3c5aee972128", 00:22:16.655 "is_configured": true, 00:22:16.655 "data_offset": 2048, 00:22:16.655 "data_size": 63488 00:22:16.655 } 00:22:16.655 ] 00:22:16.655 }' 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@657 -- # local timeout=489 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.655 16:37:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.912 16:37:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:16.912 "name": "raid_bdev1", 00:22:16.912 "uuid": "d62e090a-627f-4791-a74c-786bdb6842a5", 00:22:16.912 "strip_size_kb": 0, 00:22:16.912 "state": "online", 00:22:16.912 "raid_level": "raid1", 00:22:16.912 "superblock": true, 00:22:16.912 "num_base_bdevs": 4, 00:22:16.912 "num_base_bdevs_discovered": 3, 00:22:16.912 "num_base_bdevs_operational": 3, 00:22:16.912 "process": { 00:22:16.912 "type": "rebuild", 00:22:16.912 "target": "spare", 00:22:16.912 "progress": { 00:22:16.912 "blocks": 49152, 00:22:16.912 "percent": 77 00:22:16.912 } 00:22:16.912 }, 00:22:16.912 "base_bdevs_list": [ 00:22:16.912 { 00:22:16.912 "name": "spare", 00:22:16.912 "uuid": "c90e3dc4-1900-5923-b459-a3d31acf29ef", 00:22:16.912 "is_configured": true, 00:22:16.912 "data_offset": 2048, 00:22:16.912 "data_size": 63488 00:22:16.912 }, 00:22:16.912 { 00:22:16.912 "name": null, 00:22:16.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.912 "is_configured": false, 00:22:16.912 "data_offset": 2048, 00:22:16.912 "data_size": 63488 00:22:16.912 }, 00:22:16.912 { 00:22:16.912 "name": "BaseBdev3", 00:22:16.912 "uuid": "ab5a0efb-1405-5481-9302-e47a2257e1ba", 00:22:16.912 "is_configured": true, 00:22:16.912 "data_offset": 2048, 00:22:16.912 "data_size": 63488 00:22:16.912 }, 00:22:16.912 { 00:22:16.912 "name": "BaseBdev4", 00:22:16.912 "uuid": "c0f34b33-dddd-55a0-98b5-3c5aee972128", 00:22:16.912 "is_configured": true, 00:22:16.912 "data_offset": 2048, 00:22:16.912 "data_size": 63488 00:22:16.912 } 00:22:16.912 ] 00:22:16.912 }' 00:22:16.912 16:37:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:17.170 16:37:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:17.170 16:37:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:17.170 16:37:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:17.170 16:37:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:17.737 [2024-07-11 16:37:54.365937] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:17.737 [2024-07-11 16:37:54.366003] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:17.737 [2024-07-11 16:37:54.366146] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.303 16:37:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:18.303 16:37:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:18.303 16:37:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:18.303 16:37:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:18.303 16:37:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:18.303 16:37:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:18.303 16:37:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.303 16:37:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.303 16:37:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:18.303 "name": "raid_bdev1", 00:22:18.303 "uuid": "d62e090a-627f-4791-a74c-786bdb6842a5", 00:22:18.303 "strip_size_kb": 0, 00:22:18.303 "state": "online", 00:22:18.303 "raid_level": "raid1", 00:22:18.303 "superblock": true, 00:22:18.303 "num_base_bdevs": 4, 00:22:18.303 "num_base_bdevs_discovered": 3, 00:22:18.303 "num_base_bdevs_operational": 3, 00:22:18.303 "base_bdevs_list": [ 00:22:18.303 { 00:22:18.303 "name": "spare", 00:22:18.303 "uuid": "c90e3dc4-1900-5923-b459-a3d31acf29ef", 00:22:18.303 "is_configured": true, 00:22:18.303 "data_offset": 2048, 00:22:18.303 "data_size": 63488 00:22:18.303 }, 00:22:18.303 { 00:22:18.303 "name": null, 00:22:18.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.303 "is_configured": false, 00:22:18.303 "data_offset": 2048, 00:22:18.303 "data_size": 63488 00:22:18.303 }, 00:22:18.303 { 00:22:18.303 "name": "BaseBdev3", 00:22:18.303 "uuid": "ab5a0efb-1405-5481-9302-e47a2257e1ba", 00:22:18.303 "is_configured": true, 00:22:18.303 "data_offset": 2048, 00:22:18.303 "data_size": 63488 00:22:18.303 }, 00:22:18.303 { 00:22:18.303 "name": "BaseBdev4", 00:22:18.303 "uuid": "c0f34b33-dddd-55a0-98b5-3c5aee972128", 00:22:18.303 "is_configured": true, 00:22:18.303 "data_offset": 2048, 00:22:18.303 "data_size": 63488 00:22:18.303 } 00:22:18.303 ] 00:22:18.303 }' 00:22:18.303 16:37:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:18.303 16:37:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:18.303 16:37:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:18.561 16:37:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:18.561 16:37:55 -- bdev/bdev_raid.sh@660 -- # break 00:22:18.561 16:37:55 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:18.561 16:37:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:18.561 16:37:55 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:18.561 16:37:55 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:18.561 16:37:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:18.561 16:37:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.561 16:37:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.561 16:37:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:18.561 "name": "raid_bdev1", 00:22:18.561 "uuid": "d62e090a-627f-4791-a74c-786bdb6842a5", 00:22:18.561 "strip_size_kb": 0, 00:22:18.561 "state": "online", 00:22:18.561 "raid_level": "raid1", 00:22:18.561 "superblock": true, 00:22:18.561 "num_base_bdevs": 4, 00:22:18.561 "num_base_bdevs_discovered": 3, 00:22:18.561 "num_base_bdevs_operational": 3, 00:22:18.561 "base_bdevs_list": [ 00:22:18.561 { 00:22:18.561 "name": "spare", 00:22:18.561 "uuid": "c90e3dc4-1900-5923-b459-a3d31acf29ef", 00:22:18.561 "is_configured": true, 00:22:18.561 "data_offset": 2048, 00:22:18.561 "data_size": 63488 00:22:18.561 }, 00:22:18.561 { 00:22:18.561 "name": null, 00:22:18.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.561 "is_configured": false, 00:22:18.561 "data_offset": 2048, 00:22:18.561 "data_size": 63488 00:22:18.561 }, 00:22:18.561 { 00:22:18.561 "name": "BaseBdev3", 00:22:18.561 "uuid": 
"ab5a0efb-1405-5481-9302-e47a2257e1ba", 00:22:18.561 "is_configured": true, 00:22:18.561 "data_offset": 2048, 00:22:18.561 "data_size": 63488 00:22:18.561 }, 00:22:18.561 { 00:22:18.561 "name": "BaseBdev4", 00:22:18.561 "uuid": "c0f34b33-dddd-55a0-98b5-3c5aee972128", 00:22:18.561 "is_configured": true, 00:22:18.561 "data_offset": 2048, 00:22:18.561 "data_size": 63488 00:22:18.561 } 00:22:18.561 ] 00:22:18.561 }' 00:22:18.561 16:37:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:18.561 16:37:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:18.818 16:37:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:18.818 16:37:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:18.818 16:37:55 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:18.818 16:37:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:18.818 16:37:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:18.818 16:37:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:18.818 16:37:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:18.818 16:37:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:18.818 16:37:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:18.818 16:37:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:18.818 16:37:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:18.818 16:37:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:18.818 16:37:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.819 16:37:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:18.819 16:37:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:18.819 "name": "raid_bdev1", 00:22:18.819 "uuid": "d62e090a-627f-4791-a74c-786bdb6842a5", 00:22:18.819 "strip_size_kb": 0, 00:22:18.819 "state": "online", 00:22:18.819 "raid_level": "raid1", 00:22:18.819 "superblock": true, 00:22:18.819 "num_base_bdevs": 4, 00:22:18.819 "num_base_bdevs_discovered": 3, 00:22:18.819 "num_base_bdevs_operational": 3, 00:22:18.819 "base_bdevs_list": [ 00:22:18.819 { 00:22:18.819 "name": "spare", 00:22:18.819 "uuid": "c90e3dc4-1900-5923-b459-a3d31acf29ef", 00:22:18.819 "is_configured": true, 00:22:18.819 "data_offset": 2048, 00:22:18.819 "data_size": 63488 00:22:18.819 }, 00:22:18.819 { 00:22:18.819 "name": null, 00:22:18.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:18.819 "is_configured": false, 00:22:18.819 "data_offset": 2048, 00:22:18.819 "data_size": 63488 00:22:18.819 }, 00:22:18.819 { 00:22:18.819 "name": "BaseBdev3", 00:22:18.819 "uuid": "ab5a0efb-1405-5481-9302-e47a2257e1ba", 00:22:18.819 "is_configured": true, 00:22:18.819 "data_offset": 2048, 00:22:18.819 "data_size": 63488 00:22:18.819 }, 00:22:18.819 { 00:22:18.819 "name": "BaseBdev4", 00:22:18.819 "uuid": "c0f34b33-dddd-55a0-98b5-3c5aee972128", 00:22:18.819 "is_configured": true, 00:22:18.819 "data_offset": 2048, 00:22:18.819 "data_size": 63488 00:22:18.819 } 00:22:18.819 ] 00:22:18.819 }' 00:22:18.819 16:37:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:18.819 16:37:55 -- common/autotest_common.sh@10 -- # set +x 00:22:19.753 16:37:56 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:19.753 [2024-07-11 16:37:56.485094] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:22:19.753 [2024-07-11 16:37:56.485129] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:19.753 [2024-07-11 16:37:56.485203] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:19.753 [2024-07-11 16:37:56.485285] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:19.753 [2024-07-11 16:37:56.485297] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:22:19.753 16:37:56 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:19.753 16:37:56 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.012 16:37:56 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:20.012 16:37:56 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:20.012 16:37:56 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:20.012 16:37:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:20.012 16:37:56 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:20.012 16:37:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:20.012 16:37:56 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:20.012 16:37:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:20.012 16:37:56 -- bdev/nbd_common.sh@12 -- # local i 00:22:20.012 16:37:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:20.012 16:37:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:20.012 16:37:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:20.271 /dev/nbd0 00:22:20.271 16:37:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:20.271 16:37:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:20.271 16:37:56 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:20.271 16:37:56 -- common/autotest_common.sh@857 -- # local i 00:22:20.271 16:37:56 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:20.271 16:37:56 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:20.271 16:37:56 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:20.271 16:37:56 -- common/autotest_common.sh@861 -- # break 00:22:20.271 16:37:56 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:20.271 16:37:56 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:20.271 16:37:56 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:20.271 1+0 records in 00:22:20.271 1+0 records out 00:22:20.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018388 s, 22.3 MB/s 00:22:20.271 16:37:56 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.271 16:37:56 -- common/autotest_common.sh@874 -- # size=4096 00:22:20.271 16:37:56 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.271 16:37:56 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:20.271 16:37:56 -- common/autotest_common.sh@877 -- # return 0 00:22:20.271 16:37:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:20.271 16:37:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:20.271 16:37:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:20.530 /dev/nbd1 
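The nbd steps above export two bdevs as kernel block devices and then prove each node is real before using it: waitfornbd polls /proc/partitions until the name appears, then performs one 4 KiB O_DIRECT read (the "1+0 records in / 1+0 records out" lines) and checks that exactly 4096 bytes were copied. A condensed sketch of that helper, keeping the trace's retry limit of 20 but simplifying the scratch-file bookkeeping:

# Condensed form of the waitfornbd pattern traced above; nbd_name is e.g. nbd0.
waitfornbd() {
	local nbd_name=$1 i
	# Poll until the kernel has registered the partition entry.
	for ((i = 1; i <= 20; i++)); do
		grep -q -w "$nbd_name" /proc/partitions && break
		sleep 0.1
	done
	grep -q -w "$nbd_name" /proc/partitions || return 1
	# One direct-I/O read proves the kernel<->SPDK nbd path actually moves data.
	dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
}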
00:22:20.530 16:37:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:20.530 16:37:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:20.530 16:37:57 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:20.530 16:37:57 -- common/autotest_common.sh@857 -- # local i 00:22:20.530 16:37:57 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:20.530 16:37:57 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:20.530 16:37:57 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:20.530 16:37:57 -- common/autotest_common.sh@861 -- # break 00:22:20.530 16:37:57 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:20.530 16:37:57 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:20.530 16:37:57 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:20.530 1+0 records in 00:22:20.530 1+0 records out 00:22:20.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339247 s, 12.1 MB/s 00:22:20.530 16:37:57 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.530 16:37:57 -- common/autotest_common.sh@874 -- # size=4096 00:22:20.530 16:37:57 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.530 16:37:57 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:20.530 16:37:57 -- common/autotest_common.sh@877 -- # return 0 00:22:20.530 16:37:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:20.530 16:37:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:20.530 16:37:57 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:20.530 16:37:57 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:20.530 16:37:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:20.530 16:37:57 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:20.530 16:37:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:20.530 16:37:57 -- bdev/nbd_common.sh@51 -- # local i 00:22:20.530 16:37:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:20.530 16:37:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:20.789 16:37:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:20.789 16:37:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:20.789 16:37:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:20.789 16:37:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:20.789 16:37:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:20.789 16:37:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:20.789 16:37:57 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:21.047 16:37:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:21.047 16:37:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:21.047 16:37:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:21.047 16:37:57 -- bdev/nbd_common.sh@41 -- # break 00:22:21.047 16:37:57 -- bdev/nbd_common.sh@45 -- # return 0 00:22:21.047 16:37:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:21.047 16:37:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:21.306 16:37:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:21.306 16:37:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:21.306 16:37:57 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:22:21.306 16:37:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:21.306 16:37:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:21.306 16:37:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:21.306 16:37:57 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:21.306 16:37:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:21.306 16:37:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:21.306 16:37:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:21.306 16:37:57 -- bdev/nbd_common.sh@41 -- # break 00:22:21.306 16:37:57 -- bdev/nbd_common.sh@45 -- # return 0 00:22:21.306 16:37:57 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:21.306 16:37:57 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:21.306 16:37:57 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:21.306 16:37:57 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:21.565 16:37:58 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:21.824 [2024-07-11 16:37:58.481719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:21.824 [2024-07-11 16:37:58.481809] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.824 [2024-07-11 16:37:58.481849] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:22:21.824 [2024-07-11 16:37:58.481871] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.824 [2024-07-11 16:37:58.483911] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.824 [2024-07-11 16:37:58.483989] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:21.824 [2024-07-11 16:37:58.484111] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:21.824 [2024-07-11 16:37:58.484198] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:21.825 BaseBdev1 00:22:21.825 16:37:58 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:21.825 16:37:58 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:21.825 16:37:58 -- bdev/bdev_raid.sh@696 -- # continue 00:22:21.825 16:37:58 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:21.825 16:37:58 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:21.825 16:37:58 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:22.083 16:37:58 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:22.342 [2024-07-11 16:37:58.953791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:22.342 [2024-07-11 16:37:58.953875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.342 [2024-07-11 16:37:58.953910] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:22:22.342 [2024-07-11 16:37:58.953929] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.342 [2024-07-11 16:37:58.954339] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.342 [2024-07-11 
16:37:58.954405] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:22.342 [2024-07-11 16:37:58.954526] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:22.342 [2024-07-11 16:37:58.954539] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:22.342 [2024-07-11 16:37:58.954547] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:22.342 [2024-07-11 16:37:58.954571] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:22:22.342 [2024-07-11 16:37:58.954647] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:22.342 BaseBdev3 00:22:22.342 16:37:58 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:22.342 16:37:58 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:22.342 16:37:58 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:22.342 16:37:59 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:22.600 [2024-07-11 16:37:59.313840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:22.600 [2024-07-11 16:37:59.313915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.600 [2024-07-11 16:37:59.313944] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:22:22.600 [2024-07-11 16:37:59.313967] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.600 [2024-07-11 16:37:59.314371] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.600 [2024-07-11 16:37:59.314436] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:22.600 [2024-07-11 16:37:59.314550] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:22.600 [2024-07-11 16:37:59.314575] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:22.600 BaseBdev4 00:22:22.600 16:37:59 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:22.859 16:37:59 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:23.185 [2024-07-11 16:37:59.681899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:23.185 [2024-07-11 16:37:59.681975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.185 [2024-07-11 16:37:59.682005] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:23.185 [2024-07-11 16:37:59.682031] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.185 [2024-07-11 16:37:59.682474] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.185 [2024-07-11 16:37:59.682532] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:23.185 [2024-07-11 16:37:59.682634] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:23.185 
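Putting a member back is indirect: the test deletes the member's passthru vbdev and recreates it over the same malloc backing, which fires SPDK's examine path; the RAID superblock found on the backing device then re-claims the bdev into raid_bdev1, or, as with BaseBdev3 just above, first deletes a stale raid bdev whose superblock sequence number (1) is older than the member's (4). The round trip in shell form, with the socket path and naming taken from this run (BaseBdev2 is skipped because its slot was emptied earlier):

# Re-register surviving members; recreating each passthru triggers examine(),
# which loads the superblock from the malloc backing and re-claims the bdev.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for bdev in BaseBdev1 BaseBdev3 BaseBdev4; do
	$RPC bdev_passthru_delete "$bdev"
	$RPC bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
done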
[2024-07-11 16:37:59.682674] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:23.185 spare 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.185 [2024-07-11 16:37:59.782783] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:22:23.185 [2024-07-11 16:37:59.782805] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:23.185 [2024-07-11 16:37:59.782941] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc4860 00:22:23.185 [2024-07-11 16:37:59.783349] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:22:23.185 [2024-07-11 16:37:59.783374] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:22:23.185 [2024-07-11 16:37:59.783513] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:23.185 "name": "raid_bdev1", 00:22:23.185 "uuid": "d62e090a-627f-4791-a74c-786bdb6842a5", 00:22:23.185 "strip_size_kb": 0, 00:22:23.185 "state": "online", 00:22:23.185 "raid_level": "raid1", 00:22:23.185 "superblock": true, 00:22:23.185 "num_base_bdevs": 4, 00:22:23.185 "num_base_bdevs_discovered": 3, 00:22:23.185 "num_base_bdevs_operational": 3, 00:22:23.185 "base_bdevs_list": [ 00:22:23.185 { 00:22:23.185 "name": "spare", 00:22:23.185 "uuid": "c90e3dc4-1900-5923-b459-a3d31acf29ef", 00:22:23.185 "is_configured": true, 00:22:23.185 "data_offset": 2048, 00:22:23.185 "data_size": 63488 00:22:23.185 }, 00:22:23.185 { 00:22:23.185 "name": null, 00:22:23.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.185 "is_configured": false, 00:22:23.185 "data_offset": 2048, 00:22:23.185 "data_size": 63488 00:22:23.185 }, 00:22:23.185 { 00:22:23.185 "name": "BaseBdev3", 00:22:23.185 "uuid": "ab5a0efb-1405-5481-9302-e47a2257e1ba", 00:22:23.185 "is_configured": true, 00:22:23.185 "data_offset": 2048, 00:22:23.185 "data_size": 63488 00:22:23.185 }, 00:22:23.185 { 00:22:23.185 "name": "BaseBdev4", 00:22:23.185 "uuid": "c0f34b33-dddd-55a0-98b5-3c5aee972128", 00:22:23.185 "is_configured": true, 00:22:23.185 "data_offset": 2048, 00:22:23.185 "data_size": 63488 00:22:23.185 } 00:22:23.185 ] 00:22:23.185 }' 00:22:23.185 16:37:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:23.185 16:37:59 -- common/autotest_common.sh@10 -- # set +x 00:22:23.777 16:38:00 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none 
none 00:22:23.777 16:38:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:23.777 16:38:00 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:23.777 16:38:00 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:23.777 16:38:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:23.777 16:38:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.777 16:38:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.035 16:38:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:24.035 "name": "raid_bdev1", 00:22:24.035 "uuid": "d62e090a-627f-4791-a74c-786bdb6842a5", 00:22:24.035 "strip_size_kb": 0, 00:22:24.035 "state": "online", 00:22:24.035 "raid_level": "raid1", 00:22:24.035 "superblock": true, 00:22:24.035 "num_base_bdevs": 4, 00:22:24.035 "num_base_bdevs_discovered": 3, 00:22:24.035 "num_base_bdevs_operational": 3, 00:22:24.035 "base_bdevs_list": [ 00:22:24.035 { 00:22:24.035 "name": "spare", 00:22:24.035 "uuid": "c90e3dc4-1900-5923-b459-a3d31acf29ef", 00:22:24.035 "is_configured": true, 00:22:24.035 "data_offset": 2048, 00:22:24.035 "data_size": 63488 00:22:24.035 }, 00:22:24.035 { 00:22:24.035 "name": null, 00:22:24.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.035 "is_configured": false, 00:22:24.035 "data_offset": 2048, 00:22:24.035 "data_size": 63488 00:22:24.035 }, 00:22:24.035 { 00:22:24.035 "name": "BaseBdev3", 00:22:24.035 "uuid": "ab5a0efb-1405-5481-9302-e47a2257e1ba", 00:22:24.035 "is_configured": true, 00:22:24.035 "data_offset": 2048, 00:22:24.035 "data_size": 63488 00:22:24.035 }, 00:22:24.035 { 00:22:24.035 "name": "BaseBdev4", 00:22:24.035 "uuid": "c0f34b33-dddd-55a0-98b5-3c5aee972128", 00:22:24.035 "is_configured": true, 00:22:24.035 "data_offset": 2048, 00:22:24.035 "data_size": 63488 00:22:24.035 } 00:22:24.035 ] 00:22:24.035 }' 00:22:24.035 16:38:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:24.035 16:38:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:24.035 16:38:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:24.035 16:38:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:24.035 16:38:00 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.035 16:38:00 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:24.293 16:38:00 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:24.293 16:38:00 -- bdev/bdev_raid.sh@709 -- # killprocess 128396 00:22:24.293 16:38:00 -- common/autotest_common.sh@926 -- # '[' -z 128396 ']' 00:22:24.293 16:38:00 -- common/autotest_common.sh@930 -- # kill -0 128396 00:22:24.293 16:38:00 -- common/autotest_common.sh@931 -- # uname 00:22:24.293 16:38:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:24.293 16:38:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 128396 00:22:24.293 16:38:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:24.293 killing process with pid 128396 00:22:24.293 16:38:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:24.293 16:38:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 128396' 00:22:24.293 16:38:00 -- common/autotest_common.sh@945 -- # kill 128396 00:22:24.293 Received shutdown signal, test time was about 60.000000 seconds 00:22:24.293 00:22:24.293 Latency(us) 00:22:24.293 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.293 =================================================================================================================== 00:22:24.293 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:24.293 16:38:00 -- common/autotest_common.sh@950 -- # wait 128396 00:22:24.293 [2024-07-11 16:38:00.968577] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:24.293 [2024-07-11 16:38:00.968648] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:24.293 [2024-07-11 16:38:00.968737] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:24.293 [2024-07-11 16:38:00.968759] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:22:24.552 [2024-07-11 16:38:01.283552] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:25.487 ************************************ 00:22:25.487 END TEST raid_rebuild_test_sb 00:22:25.487 ************************************ 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:25.487 00:22:25.487 real 0m26.411s 00:22:25.487 user 0m38.830s 00:22:25.487 sys 0m3.517s 00:22:25.487 16:38:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:25.487 16:38:02 -- common/autotest_common.sh@10 -- # set +x 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:22:25.487 16:38:02 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:25.487 16:38:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:25.487 16:38:02 -- common/autotest_common.sh@10 -- # set +x 00:22:25.487 ************************************ 00:22:25.487 START TEST raid_rebuild_test_io 00:22:25.487 ************************************ 00:22:25.487 16:38:02 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@522 -- # local 
raid_bdev_name=raid_bdev1 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@544 -- # raid_pid=129085 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@545 -- # waitforlisten 129085 /var/tmp/spdk-raid.sock 00:22:25.487 16:38:02 -- common/autotest_common.sh@819 -- # '[' -z 129085 ']' 00:22:25.487 16:38:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:25.487 16:38:02 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:25.487 16:38:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:25.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:25.487 16:38:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:25.487 16:38:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:25.487 16:38:02 -- common/autotest_common.sh@10 -- # set +x 00:22:25.746 [2024-07-11 16:38:02.324845] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:25.746 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:25.746 Zero copy mechanism will not be used. 
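Because background_io=true, this variant does not start the plain SPDK app: it launches the bdevperf example on the same RPC socket with a 60-second random read/write workload (-w randrw -M 50, a 50/50 mix) of 3 MiB I/Os at queue depth 2, which will keep hammering raid_bdev1 while members are removed and rebuilt underneath it; -z defers the run until the perform_tests RPC seen later in the log. A sketch of the launch-and-wait step, with a simple polling loop standing in for the waitforlisten helper:

# Launch bdevperf as the background-I/O driver (flags copied from the trace).
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
$BDEVPERF -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
	-o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
# Crude stand-in for waitforlisten: poll until the UNIX socket answers RPCs.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
	rpc_get_methods >/dev/null 2>&1; do
	sleep 0.2
done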
00:22:25.746 [2024-07-11 16:38:02.325035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129085 ] 00:22:25.746 [2024-07-11 16:38:02.490047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.004 [2024-07-11 16:38:02.647689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.004 [2024-07-11 16:38:02.810122] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:26.570 16:38:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:26.570 16:38:03 -- common/autotest_common.sh@852 -- # return 0 00:22:26.570 16:38:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:26.570 16:38:03 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:26.570 16:38:03 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:26.828 BaseBdev1 00:22:26.829 16:38:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:26.829 16:38:03 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:26.829 16:38:03 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:27.087 BaseBdev2 00:22:27.087 16:38:03 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:27.087 16:38:03 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:27.087 16:38:03 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:27.344 BaseBdev3 00:22:27.344 16:38:04 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:27.344 16:38:04 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:27.344 16:38:04 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:27.602 BaseBdev4 00:22:27.602 16:38:04 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:27.860 spare_malloc 00:22:27.860 16:38:04 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:28.117 spare_delay 00:22:28.117 16:38:04 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:28.374 [2024-07-11 16:38:04.941638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:28.374 [2024-07-11 16:38:04.941729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:28.374 [2024-07-11 16:38:04.941762] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:28.374 [2024-07-11 16:38:04.941802] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:28.374 [2024-07-11 16:38:04.943821] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:28.374 [2024-07-11 16:38:04.943883] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:28.374 spare 00:22:28.374 16:38:04 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:28.374 [2024-07-11 16:38:05.117700] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:28.374 [2024-07-11 16:38:05.119265] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:28.374 [2024-07-11 16:38:05.119317] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:28.374 [2024-07-11 16:38:05.119352] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:28.374 [2024-07-11 16:38:05.119422] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:22:28.374 [2024-07-11 16:38:05.119434] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:28.374 [2024-07-11 16:38:05.119611] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:22:28.374 [2024-07-11 16:38:05.119931] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:22:28.374 [2024-07-11 16:38:05.119956] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:22:28.374 [2024-07-11 16:38:05.120097] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.374 16:38:05 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:28.374 16:38:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:28.374 16:38:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:28.374 16:38:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:28.374 16:38:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:28.374 16:38:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:28.374 16:38:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:28.374 16:38:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:28.374 16:38:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:28.374 16:38:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:28.374 16:38:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.374 16:38:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.632 16:38:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:28.632 "name": "raid_bdev1", 00:22:28.632 "uuid": "64514ab5-c4c4-46f1-bed5-2b3f15ec8f05", 00:22:28.632 "strip_size_kb": 0, 00:22:28.632 "state": "online", 00:22:28.632 "raid_level": "raid1", 00:22:28.632 "superblock": false, 00:22:28.632 "num_base_bdevs": 4, 00:22:28.632 "num_base_bdevs_discovered": 4, 00:22:28.632 "num_base_bdevs_operational": 4, 00:22:28.632 "base_bdevs_list": [ 00:22:28.632 { 00:22:28.632 "name": "BaseBdev1", 00:22:28.632 "uuid": "c8927d60-5570-4271-b490-3d8fbbaf071a", 00:22:28.632 "is_configured": true, 00:22:28.632 "data_offset": 0, 00:22:28.632 "data_size": 65536 00:22:28.632 }, 00:22:28.632 { 00:22:28.632 "name": "BaseBdev2", 00:22:28.632 "uuid": "3c09941a-1c6f-4ac6-833a-f4f8ef3dbbae", 00:22:28.632 "is_configured": true, 00:22:28.632 "data_offset": 0, 00:22:28.632 "data_size": 65536 00:22:28.632 }, 00:22:28.632 { 00:22:28.632 "name": "BaseBdev3", 00:22:28.632 "uuid": "eb1ac575-dfc5-443e-89d3-d0efee6dff72", 00:22:28.632 "is_configured": true, 00:22:28.632 "data_offset": 0, 00:22:28.632 "data_size": 65536 00:22:28.632 }, 
00:22:28.632 { 00:22:28.632 "name": "BaseBdev4", 00:22:28.632 "uuid": "315ae241-b5b3-4fe8-bcd1-798eb51c95c0", 00:22:28.632 "is_configured": true, 00:22:28.632 "data_offset": 0, 00:22:28.632 "data_size": 65536 00:22:28.632 } 00:22:28.632 ] 00:22:28.632 }' 00:22:28.632 16:38:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:28.632 16:38:05 -- common/autotest_common.sh@10 -- # set +x 00:22:29.196 16:38:05 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:29.196 16:38:05 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:29.454 [2024-07-11 16:38:06.174078] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:29.454 16:38:06 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:29.454 16:38:06 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.454 16:38:06 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:29.711 16:38:06 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:29.711 16:38:06 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:29.711 16:38:06 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:29.711 16:38:06 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:29.711 [2024-07-11 16:38:06.504740] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:29.711 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:29.711 Zero copy mechanism will not be used. 00:22:29.711 Running I/O for 60 seconds... 
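"Running I/O for 60 seconds..." above is the perform_tests kick-off, so everything from here on happens under live randrw traffic. The geometry reads before it are plain jq pulls over RPC output — num_blocks is 65536 and data_offset is 0, since this variant builds the array with superblock=false — and degrading the array is then a single RPC while bdevperf keeps issuing I/O. Roughly, with the socket path from this run:

# Pull raid geometry out of the RPC JSON, then hot-remove a member mid-workload;
# the DEBUG lines that follow show the array dropping to 3 of 4 members online.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
raid_bdev_size=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')              # 65536
data_offset=$($rpc bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset')  # 0
$rpc bdev_raid_remove_base_bdev BaseBdev1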
00:22:29.969 [2024-07-11 16:38:06.604987] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:29.969 [2024-07-11 16:38:06.613220] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:22:29.969 16:38:06 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:29.969 16:38:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:29.969 16:38:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:29.969 16:38:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:29.969 16:38:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:29.969 16:38:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:29.969 16:38:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:29.969 16:38:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:29.969 16:38:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:29.969 16:38:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:29.969 16:38:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.969 16:38:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.227 16:38:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:30.227 "name": "raid_bdev1", 00:22:30.227 "uuid": "64514ab5-c4c4-46f1-bed5-2b3f15ec8f05", 00:22:30.227 "strip_size_kb": 0, 00:22:30.227 "state": "online", 00:22:30.227 "raid_level": "raid1", 00:22:30.227 "superblock": false, 00:22:30.227 "num_base_bdevs": 4, 00:22:30.227 "num_base_bdevs_discovered": 3, 00:22:30.227 "num_base_bdevs_operational": 3, 00:22:30.227 "base_bdevs_list": [ 00:22:30.227 { 00:22:30.227 "name": null, 00:22:30.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.227 "is_configured": false, 00:22:30.227 "data_offset": 0, 00:22:30.227 "data_size": 65536 00:22:30.227 }, 00:22:30.227 { 00:22:30.227 "name": "BaseBdev2", 00:22:30.227 "uuid": "3c09941a-1c6f-4ac6-833a-f4f8ef3dbbae", 00:22:30.227 "is_configured": true, 00:22:30.227 "data_offset": 0, 00:22:30.227 "data_size": 65536 00:22:30.227 }, 00:22:30.227 { 00:22:30.227 "name": "BaseBdev3", 00:22:30.227 "uuid": "eb1ac575-dfc5-443e-89d3-d0efee6dff72", 00:22:30.227 "is_configured": true, 00:22:30.227 "data_offset": 0, 00:22:30.227 "data_size": 65536 00:22:30.227 }, 00:22:30.227 { 00:22:30.227 "name": "BaseBdev4", 00:22:30.227 "uuid": "315ae241-b5b3-4fe8-bcd1-798eb51c95c0", 00:22:30.227 "is_configured": true, 00:22:30.227 "data_offset": 0, 00:22:30.227 "data_size": 65536 00:22:30.227 } 00:22:30.227 ] 00:22:30.227 }' 00:22:30.227 16:38:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:30.227 16:38:06 -- common/autotest_common.sh@10 -- # set +x 00:22:30.794 16:38:07 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:31.052 [2024-07-11 16:38:07.626859] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:31.052 [2024-07-11 16:38:07.626924] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:31.052 16:38:07 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:31.052 [2024-07-11 16:38:07.702710] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:31.052 [2024-07-11 16:38:07.704652] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:31.052 [2024-07-11 
16:38:07.815928] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:31.052 [2024-07-11 16:38:07.816453] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:31.310 [2024-07-11 16:38:08.033149] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:31.310 [2024-07-11 16:38:08.033429] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:31.568 [2024-07-11 16:38:08.269343] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:31.568 [2024-07-11 16:38:08.270475] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:31.827 [2024-07-11 16:38:08.502542] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:31.827 [2024-07-11 16:38:08.503318] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:32.087 16:38:08 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.087 16:38:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:32.087 16:38:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:32.087 16:38:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:32.087 16:38:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:32.087 16:38:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.087 16:38:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.087 [2024-07-11 16:38:08.837077] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:32.346 16:38:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:32.346 "name": "raid_bdev1", 00:22:32.346 "uuid": "64514ab5-c4c4-46f1-bed5-2b3f15ec8f05", 00:22:32.346 "strip_size_kb": 0, 00:22:32.346 "state": "online", 00:22:32.346 "raid_level": "raid1", 00:22:32.346 "superblock": false, 00:22:32.346 "num_base_bdevs": 4, 00:22:32.346 "num_base_bdevs_discovered": 4, 00:22:32.346 "num_base_bdevs_operational": 4, 00:22:32.346 "process": { 00:22:32.346 "type": "rebuild", 00:22:32.346 "target": "spare", 00:22:32.346 "progress": { 00:22:32.346 "blocks": 14336, 00:22:32.346 "percent": 21 00:22:32.346 } 00:22:32.346 }, 00:22:32.346 "base_bdevs_list": [ 00:22:32.346 { 00:22:32.346 "name": "spare", 00:22:32.346 "uuid": "a34b1403-090b-5bf0-980d-b55d79d7b7da", 00:22:32.346 "is_configured": true, 00:22:32.346 "data_offset": 0, 00:22:32.346 "data_size": 65536 00:22:32.346 }, 00:22:32.346 { 00:22:32.346 "name": "BaseBdev2", 00:22:32.346 "uuid": "3c09941a-1c6f-4ac6-833a-f4f8ef3dbbae", 00:22:32.346 "is_configured": true, 00:22:32.346 "data_offset": 0, 00:22:32.346 "data_size": 65536 00:22:32.346 }, 00:22:32.346 { 00:22:32.346 "name": "BaseBdev3", 00:22:32.346 "uuid": "eb1ac575-dfc5-443e-89d3-d0efee6dff72", 00:22:32.346 "is_configured": true, 00:22:32.346 "data_offset": 0, 00:22:32.346 "data_size": 65536 00:22:32.346 }, 00:22:32.346 { 00:22:32.346 "name": "BaseBdev4", 00:22:32.346 "uuid": "315ae241-b5b3-4fe8-bcd1-798eb51c95c0", 00:22:32.346 "is_configured": true, 
00:22:32.346 "data_offset": 0, 00:22:32.346 "data_size": 65536 00:22:32.346 } 00:22:32.346 ] 00:22:32.346 }' 00:22:32.346 16:38:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:32.346 16:38:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.346 16:38:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:32.346 16:38:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:32.346 16:38:09 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:32.606 [2024-07-11 16:38:09.260741] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:32.865 [2024-07-11 16:38:09.414677] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:32.865 [2024-07-11 16:38:09.426737] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.865 [2024-07-11 16:38:09.455284] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:22:32.865 16:38:09 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:32.865 16:38:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:32.865 16:38:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:32.865 16:38:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:32.865 16:38:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:32.865 16:38:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:32.865 16:38:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:32.865 16:38:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:32.865 16:38:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:32.865 16:38:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:32.865 16:38:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.865 16:38:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.124 16:38:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:33.124 "name": "raid_bdev1", 00:22:33.124 "uuid": "64514ab5-c4c4-46f1-bed5-2b3f15ec8f05", 00:22:33.124 "strip_size_kb": 0, 00:22:33.124 "state": "online", 00:22:33.124 "raid_level": "raid1", 00:22:33.124 "superblock": false, 00:22:33.124 "num_base_bdevs": 4, 00:22:33.124 "num_base_bdevs_discovered": 3, 00:22:33.124 "num_base_bdevs_operational": 3, 00:22:33.124 "base_bdevs_list": [ 00:22:33.124 { 00:22:33.124 "name": null, 00:22:33.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.124 "is_configured": false, 00:22:33.124 "data_offset": 0, 00:22:33.124 "data_size": 65536 00:22:33.124 }, 00:22:33.124 { 00:22:33.124 "name": "BaseBdev2", 00:22:33.124 "uuid": "3c09941a-1c6f-4ac6-833a-f4f8ef3dbbae", 00:22:33.124 "is_configured": true, 00:22:33.124 "data_offset": 0, 00:22:33.124 "data_size": 65536 00:22:33.124 }, 00:22:33.124 { 00:22:33.124 "name": "BaseBdev3", 00:22:33.124 "uuid": "eb1ac575-dfc5-443e-89d3-d0efee6dff72", 00:22:33.124 "is_configured": true, 00:22:33.124 "data_offset": 0, 00:22:33.124 "data_size": 65536 00:22:33.124 }, 00:22:33.124 { 00:22:33.124 "name": "BaseBdev4", 00:22:33.124 "uuid": "315ae241-b5b3-4fe8-bcd1-798eb51c95c0", 00:22:33.124 "is_configured": true, 00:22:33.124 "data_offset": 0, 00:22:33.124 "data_size": 65536 00:22:33.124 } 00:22:33.124 ] 00:22:33.124 }' 00:22:33.124 16:38:09 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:33.124 16:38:09 -- common/autotest_common.sh@10 -- # set +x 00:22:33.691 16:38:10 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:33.691 16:38:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:33.691 16:38:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:33.691 16:38:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:33.691 16:38:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:33.691 16:38:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.691 16:38:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.950 16:38:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:33.950 "name": "raid_bdev1", 00:22:33.950 "uuid": "64514ab5-c4c4-46f1-bed5-2b3f15ec8f05", 00:22:33.950 "strip_size_kb": 0, 00:22:33.950 "state": "online", 00:22:33.950 "raid_level": "raid1", 00:22:33.950 "superblock": false, 00:22:33.950 "num_base_bdevs": 4, 00:22:33.950 "num_base_bdevs_discovered": 3, 00:22:33.950 "num_base_bdevs_operational": 3, 00:22:33.950 "base_bdevs_list": [ 00:22:33.950 { 00:22:33.950 "name": null, 00:22:33.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.950 "is_configured": false, 00:22:33.950 "data_offset": 0, 00:22:33.950 "data_size": 65536 00:22:33.950 }, 00:22:33.950 { 00:22:33.950 "name": "BaseBdev2", 00:22:33.950 "uuid": "3c09941a-1c6f-4ac6-833a-f4f8ef3dbbae", 00:22:33.950 "is_configured": true, 00:22:33.950 "data_offset": 0, 00:22:33.950 "data_size": 65536 00:22:33.950 }, 00:22:33.950 { 00:22:33.950 "name": "BaseBdev3", 00:22:33.950 "uuid": "eb1ac575-dfc5-443e-89d3-d0efee6dff72", 00:22:33.950 "is_configured": true, 00:22:33.950 "data_offset": 0, 00:22:33.950 "data_size": 65536 00:22:33.950 }, 00:22:33.950 { 00:22:33.950 "name": "BaseBdev4", 00:22:33.950 "uuid": "315ae241-b5b3-4fe8-bcd1-798eb51c95c0", 00:22:33.950 "is_configured": true, 00:22:33.950 "data_offset": 0, 00:22:33.950 "data_size": 65536 00:22:33.950 } 00:22:33.950 ] 00:22:33.950 }' 00:22:33.950 16:38:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.950 16:38:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:33.950 16:38:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:33.950 16:38:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:33.950 16:38:10 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:34.210 [2024-07-11 16:38:10.937021] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:34.210 [2024-07-11 16:38:10.937095] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:34.210 16:38:10 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:34.210 [2024-07-11 16:38:10.992709] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:34.210 [2024-07-11 16:38:10.994429] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:34.481 [2024-07-11 16:38:11.120927] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:34.481 [2024-07-11 16:38:11.242200] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:34.481 [2024-07-11 16:38:11.242821] 
bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:35.053 [2024-07-11 16:38:11.588910] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:35.311 [2024-07-11 16:38:11.937772] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:35.311 16:38:11 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:35.311 16:38:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:35.311 16:38:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:35.311 16:38:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:35.311 16:38:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:35.311 16:38:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.311 16:38:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.569 [2024-07-11 16:38:12.176282] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:35.569 [2024-07-11 16:38:12.177000] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:35.569 16:38:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:35.569 "name": "raid_bdev1", 00:22:35.569 "uuid": "64514ab5-c4c4-46f1-bed5-2b3f15ec8f05", 00:22:35.569 "strip_size_kb": 0, 00:22:35.569 "state": "online", 00:22:35.569 "raid_level": "raid1", 00:22:35.569 "superblock": false, 00:22:35.569 "num_base_bdevs": 4, 00:22:35.569 "num_base_bdevs_discovered": 4, 00:22:35.569 "num_base_bdevs_operational": 4, 00:22:35.569 "process": { 00:22:35.569 "type": "rebuild", 00:22:35.569 "target": "spare", 00:22:35.569 "progress": { 00:22:35.569 "blocks": 16384, 00:22:35.569 "percent": 25 00:22:35.569 } 00:22:35.569 }, 00:22:35.569 "base_bdevs_list": [ 00:22:35.569 { 00:22:35.569 "name": "spare", 00:22:35.569 "uuid": "a34b1403-090b-5bf0-980d-b55d79d7b7da", 00:22:35.569 "is_configured": true, 00:22:35.569 "data_offset": 0, 00:22:35.569 "data_size": 65536 00:22:35.569 }, 00:22:35.569 { 00:22:35.569 "name": "BaseBdev2", 00:22:35.569 "uuid": "3c09941a-1c6f-4ac6-833a-f4f8ef3dbbae", 00:22:35.569 "is_configured": true, 00:22:35.569 "data_offset": 0, 00:22:35.569 "data_size": 65536 00:22:35.569 }, 00:22:35.569 { 00:22:35.569 "name": "BaseBdev3", 00:22:35.569 "uuid": "eb1ac575-dfc5-443e-89d3-d0efee6dff72", 00:22:35.569 "is_configured": true, 00:22:35.569 "data_offset": 0, 00:22:35.569 "data_size": 65536 00:22:35.569 }, 00:22:35.569 { 00:22:35.569 "name": "BaseBdev4", 00:22:35.569 "uuid": "315ae241-b5b3-4fe8-bcd1-798eb51c95c0", 00:22:35.569 "is_configured": true, 00:22:35.569 "data_offset": 0, 00:22:35.569 "data_size": 65536 00:22:35.569 } 00:22:35.569 ] 00:22:35.569 }' 00:22:35.569 16:38:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:35.569 16:38:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:35.569 16:38:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:35.569 16:38:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:35.569 16:38:12 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:35.569 16:38:12 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:35.569 16:38:12 -- bdev/bdev_raid.sh@644 -- 
# '[' raid1 = raid1 ']' 00:22:35.569 16:38:12 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:35.569 16:38:12 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:35.828 [2024-07-11 16:38:12.546057] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:35.828 [2024-07-11 16:38:12.562143] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:36.085 [2024-07-11 16:38:12.877376] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005930 00:22:36.085 [2024-07-11 16:38:12.877430] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ba0 00:22:36.085 [2024-07-11 16:38:12.879361] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:36.343 16:38:12 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:36.343 16:38:12 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:36.343 16:38:12 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:36.343 16:38:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:36.343 16:38:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:36.343 16:38:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:36.343 16:38:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:36.343 16:38:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.343 16:38:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.343 16:38:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:36.343 "name": "raid_bdev1", 00:22:36.343 "uuid": "64514ab5-c4c4-46f1-bed5-2b3f15ec8f05", 00:22:36.343 "strip_size_kb": 0, 00:22:36.343 "state": "online", 00:22:36.343 "raid_level": "raid1", 00:22:36.343 "superblock": false, 00:22:36.343 "num_base_bdevs": 4, 00:22:36.343 "num_base_bdevs_discovered": 3, 00:22:36.343 "num_base_bdevs_operational": 3, 00:22:36.343 "process": { 00:22:36.343 "type": "rebuild", 00:22:36.343 "target": "spare", 00:22:36.343 "progress": { 00:22:36.343 "blocks": 26624, 00:22:36.343 "percent": 40 00:22:36.343 } 00:22:36.344 }, 00:22:36.344 "base_bdevs_list": [ 00:22:36.344 { 00:22:36.344 "name": "spare", 00:22:36.344 "uuid": "a34b1403-090b-5bf0-980d-b55d79d7b7da", 00:22:36.344 "is_configured": true, 00:22:36.344 "data_offset": 0, 00:22:36.344 "data_size": 65536 00:22:36.344 }, 00:22:36.344 { 00:22:36.344 "name": null, 00:22:36.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.344 "is_configured": false, 00:22:36.344 "data_offset": 0, 00:22:36.344 "data_size": 65536 00:22:36.344 }, 00:22:36.344 { 00:22:36.344 "name": "BaseBdev3", 00:22:36.344 "uuid": "eb1ac575-dfc5-443e-89d3-d0efee6dff72", 00:22:36.344 "is_configured": true, 00:22:36.344 "data_offset": 0, 00:22:36.344 "data_size": 65536 00:22:36.344 }, 00:22:36.344 { 00:22:36.344 "name": "BaseBdev4", 00:22:36.344 "uuid": "315ae241-b5b3-4fe8-bcd1-798eb51c95c0", 00:22:36.344 "is_configured": true, 00:22:36.344 "data_offset": 0, 00:22:36.344 "data_size": 65536 00:22:36.344 } 00:22:36.344 ] 00:22:36.344 }' 00:22:36.344 16:38:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:36.601 16:38:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:36.601 16:38:13 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:36.601 16:38:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:36.601 16:38:13 -- bdev/bdev_raid.sh@657 -- # local timeout=509 00:22:36.601 16:38:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:36.601 16:38:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:36.601 16:38:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:36.601 16:38:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:36.601 16:38:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:36.601 16:38:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:36.601 16:38:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.601 16:38:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:36.859 [2024-07-11 16:38:13.455053] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:36.859 [2024-07-11 16:38:13.455551] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:36.859 16:38:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:36.859 "name": "raid_bdev1", 00:22:36.859 "uuid": "64514ab5-c4c4-46f1-bed5-2b3f15ec8f05", 00:22:36.859 "strip_size_kb": 0, 00:22:36.859 "state": "online", 00:22:36.859 "raid_level": "raid1", 00:22:36.859 "superblock": false, 00:22:36.859 "num_base_bdevs": 4, 00:22:36.859 "num_base_bdevs_discovered": 3, 00:22:36.859 "num_base_bdevs_operational": 3, 00:22:36.859 "process": { 00:22:36.859 "type": "rebuild", 00:22:36.859 "target": "spare", 00:22:36.859 "progress": { 00:22:36.859 "blocks": 32768, 00:22:36.859 "percent": 50 00:22:36.859 } 00:22:36.859 }, 00:22:36.859 "base_bdevs_list": [ 00:22:36.859 { 00:22:36.859 "name": "spare", 00:22:36.859 "uuid": "a34b1403-090b-5bf0-980d-b55d79d7b7da", 00:22:36.859 "is_configured": true, 00:22:36.859 "data_offset": 0, 00:22:36.859 "data_size": 65536 00:22:36.859 }, 00:22:36.859 { 00:22:36.859 "name": null, 00:22:36.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.859 "is_configured": false, 00:22:36.859 "data_offset": 0, 00:22:36.859 "data_size": 65536 00:22:36.859 }, 00:22:36.859 { 00:22:36.859 "name": "BaseBdev3", 00:22:36.859 "uuid": "eb1ac575-dfc5-443e-89d3-d0efee6dff72", 00:22:36.859 "is_configured": true, 00:22:36.859 "data_offset": 0, 00:22:36.859 "data_size": 65536 00:22:36.859 }, 00:22:36.859 { 00:22:36.859 "name": "BaseBdev4", 00:22:36.859 "uuid": "315ae241-b5b3-4fe8-bcd1-798eb51c95c0", 00:22:36.859 "is_configured": true, 00:22:36.859 "data_offset": 0, 00:22:36.859 "data_size": 65536 00:22:36.859 } 00:22:36.859 ] 00:22:36.859 }' 00:22:36.859 16:38:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:36.859 16:38:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:36.859 16:38:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:36.859 16:38:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:36.859 16:38:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:37.117 [2024-07-11 16:38:13.687745] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:37.375 [2024-07-11 16:38:14.059708] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 
offset_end: 43008 00:22:37.944 16:38:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:37.944 16:38:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:37.944 16:38:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:37.944 16:38:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:37.944 16:38:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:37.944 16:38:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:37.944 16:38:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.944 16:38:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.203 16:38:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:38.203 "name": "raid_bdev1", 00:22:38.203 "uuid": "64514ab5-c4c4-46f1-bed5-2b3f15ec8f05", 00:22:38.203 "strip_size_kb": 0, 00:22:38.203 "state": "online", 00:22:38.203 "raid_level": "raid1", 00:22:38.203 "superblock": false, 00:22:38.203 "num_base_bdevs": 4, 00:22:38.203 "num_base_bdevs_discovered": 3, 00:22:38.203 "num_base_bdevs_operational": 3, 00:22:38.203 "process": { 00:22:38.203 "type": "rebuild", 00:22:38.203 "target": "spare", 00:22:38.203 "progress": { 00:22:38.203 "blocks": 53248, 00:22:38.203 "percent": 81 00:22:38.203 } 00:22:38.203 }, 00:22:38.203 "base_bdevs_list": [ 00:22:38.203 { 00:22:38.203 "name": "spare", 00:22:38.203 "uuid": "a34b1403-090b-5bf0-980d-b55d79d7b7da", 00:22:38.203 "is_configured": true, 00:22:38.203 "data_offset": 0, 00:22:38.203 "data_size": 65536 00:22:38.203 }, 00:22:38.203 { 00:22:38.203 "name": null, 00:22:38.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.203 "is_configured": false, 00:22:38.203 "data_offset": 0, 00:22:38.203 "data_size": 65536 00:22:38.203 }, 00:22:38.203 { 00:22:38.203 "name": "BaseBdev3", 00:22:38.203 "uuid": "eb1ac575-dfc5-443e-89d3-d0efee6dff72", 00:22:38.203 "is_configured": true, 00:22:38.203 "data_offset": 0, 00:22:38.203 "data_size": 65536 00:22:38.203 }, 00:22:38.203 { 00:22:38.203 "name": "BaseBdev4", 00:22:38.203 "uuid": "315ae241-b5b3-4fe8-bcd1-798eb51c95c0", 00:22:38.203 "is_configured": true, 00:22:38.203 "data_offset": 0, 00:22:38.203 "data_size": 65536 00:22:38.203 } 00:22:38.203 ] 00:22:38.203 }' 00:22:38.203 16:38:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:38.203 16:38:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:38.203 16:38:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:38.203 16:38:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:38.203 16:38:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:38.771 [2024-07-11 16:38:15.495337] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:39.030 [2024-07-11 16:38:15.601044] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:39.030 [2024-07-11 16:38:15.602715] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.289 16:38:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:39.289 16:38:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:39.289 16:38:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:39.289 16:38:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:39.289 16:38:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:39.289 16:38:15 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:39.289 16:38:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.289 16:38:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.548 16:38:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:39.548 "name": "raid_bdev1", 00:22:39.548 "uuid": "64514ab5-c4c4-46f1-bed5-2b3f15ec8f05", 00:22:39.548 "strip_size_kb": 0, 00:22:39.548 "state": "online", 00:22:39.548 "raid_level": "raid1", 00:22:39.548 "superblock": false, 00:22:39.548 "num_base_bdevs": 4, 00:22:39.548 "num_base_bdevs_discovered": 3, 00:22:39.548 "num_base_bdevs_operational": 3, 00:22:39.548 "base_bdevs_list": [ 00:22:39.548 { 00:22:39.548 "name": "spare", 00:22:39.548 "uuid": "a34b1403-090b-5bf0-980d-b55d79d7b7da", 00:22:39.548 "is_configured": true, 00:22:39.548 "data_offset": 0, 00:22:39.548 "data_size": 65536 00:22:39.548 }, 00:22:39.548 { 00:22:39.548 "name": null, 00:22:39.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.548 "is_configured": false, 00:22:39.548 "data_offset": 0, 00:22:39.548 "data_size": 65536 00:22:39.548 }, 00:22:39.548 { 00:22:39.548 "name": "BaseBdev3", 00:22:39.548 "uuid": "eb1ac575-dfc5-443e-89d3-d0efee6dff72", 00:22:39.548 "is_configured": true, 00:22:39.548 "data_offset": 0, 00:22:39.548 "data_size": 65536 00:22:39.548 }, 00:22:39.548 { 00:22:39.548 "name": "BaseBdev4", 00:22:39.548 "uuid": "315ae241-b5b3-4fe8-bcd1-798eb51c95c0", 00:22:39.548 "is_configured": true, 00:22:39.548 "data_offset": 0, 00:22:39.548 "data_size": 65536 00:22:39.548 } 00:22:39.548 ] 00:22:39.548 }' 00:22:39.548 16:38:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:39.548 16:38:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:39.548 16:38:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:39.548 16:38:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:39.548 16:38:16 -- bdev/bdev_raid.sh@660 -- # break 00:22:39.548 16:38:16 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:39.548 16:38:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:39.548 16:38:16 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:39.548 16:38:16 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:39.548 16:38:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:39.548 16:38:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.548 16:38:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.807 16:38:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:39.807 "name": "raid_bdev1", 00:22:39.807 "uuid": "64514ab5-c4c4-46f1-bed5-2b3f15ec8f05", 00:22:39.807 "strip_size_kb": 0, 00:22:39.807 "state": "online", 00:22:39.807 "raid_level": "raid1", 00:22:39.807 "superblock": false, 00:22:39.807 "num_base_bdevs": 4, 00:22:39.807 "num_base_bdevs_discovered": 3, 00:22:39.807 "num_base_bdevs_operational": 3, 00:22:39.807 "base_bdevs_list": [ 00:22:39.807 { 00:22:39.807 "name": "spare", 00:22:39.807 "uuid": "a34b1403-090b-5bf0-980d-b55d79d7b7da", 00:22:39.807 "is_configured": true, 00:22:39.807 "data_offset": 0, 00:22:39.807 "data_size": 65536 00:22:39.807 }, 00:22:39.807 { 00:22:39.807 "name": null, 00:22:39.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.807 "is_configured": false, 00:22:39.807 "data_offset": 0, 
00:22:39.807 "data_size": 65536 00:22:39.807 }, 00:22:39.807 { 00:22:39.807 "name": "BaseBdev3", 00:22:39.807 "uuid": "eb1ac575-dfc5-443e-89d3-d0efee6dff72", 00:22:39.807 "is_configured": true, 00:22:39.807 "data_offset": 0, 00:22:39.807 "data_size": 65536 00:22:39.807 }, 00:22:39.807 { 00:22:39.807 "name": "BaseBdev4", 00:22:39.807 "uuid": "315ae241-b5b3-4fe8-bcd1-798eb51c95c0", 00:22:39.807 "is_configured": true, 00:22:39.807 "data_offset": 0, 00:22:39.807 "data_size": 65536 00:22:39.807 } 00:22:39.807 ] 00:22:39.807 }' 00:22:39.807 16:38:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:40.067 16:38:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:40.067 16:38:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:40.067 16:38:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:40.067 16:38:16 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:40.067 16:38:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:40.067 16:38:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:40.067 16:38:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:40.067 16:38:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:40.067 16:38:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:40.067 16:38:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:40.067 16:38:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:40.067 16:38:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:40.067 16:38:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:40.068 16:38:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.068 16:38:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.326 16:38:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:40.326 "name": "raid_bdev1", 00:22:40.326 "uuid": "64514ab5-c4c4-46f1-bed5-2b3f15ec8f05", 00:22:40.326 "strip_size_kb": 0, 00:22:40.326 "state": "online", 00:22:40.326 "raid_level": "raid1", 00:22:40.326 "superblock": false, 00:22:40.326 "num_base_bdevs": 4, 00:22:40.326 "num_base_bdevs_discovered": 3, 00:22:40.326 "num_base_bdevs_operational": 3, 00:22:40.326 "base_bdevs_list": [ 00:22:40.326 { 00:22:40.326 "name": "spare", 00:22:40.326 "uuid": "a34b1403-090b-5bf0-980d-b55d79d7b7da", 00:22:40.326 "is_configured": true, 00:22:40.326 "data_offset": 0, 00:22:40.326 "data_size": 65536 00:22:40.326 }, 00:22:40.326 { 00:22:40.326 "name": null, 00:22:40.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.326 "is_configured": false, 00:22:40.326 "data_offset": 0, 00:22:40.326 "data_size": 65536 00:22:40.326 }, 00:22:40.326 { 00:22:40.326 "name": "BaseBdev3", 00:22:40.326 "uuid": "eb1ac575-dfc5-443e-89d3-d0efee6dff72", 00:22:40.326 "is_configured": true, 00:22:40.326 "data_offset": 0, 00:22:40.326 "data_size": 65536 00:22:40.326 }, 00:22:40.326 { 00:22:40.326 "name": "BaseBdev4", 00:22:40.326 "uuid": "315ae241-b5b3-4fe8-bcd1-798eb51c95c0", 00:22:40.326 "is_configured": true, 00:22:40.326 "data_offset": 0, 00:22:40.326 "data_size": 65536 00:22:40.326 } 00:22:40.326 ] 00:22:40.326 }' 00:22:40.326 16:38:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:40.326 16:38:16 -- common/autotest_common.sh@10 -- # set +x 00:22:40.893 16:38:17 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 
00:22:41.152 [2024-07-11 16:38:17.704416] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:41.152 [2024-07-11 16:38:17.704456] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:41.152 00:22:41.152 Latency(us) 00:22:41.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.152 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:41.152 raid_bdev1 : 11.26 102.83 308.48 0.00 0.00 13709.22 305.34 124875.87 00:22:41.152 =================================================================================================================== 00:22:41.152 Total : 102.83 308.48 0.00 0.00 13709.22 305.34 124875.87 00:22:41.152 [2024-07-11 16:38:17.783009] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.152 [2024-07-11 16:38:17.783072] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:41.152 0 00:22:41.152 [2024-07-11 16:38:17.783163] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:41.152 [2024-07-11 16:38:17.783177] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:22:41.152 16:38:17 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.152 16:38:17 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:41.411 16:38:17 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:41.411 16:38:17 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:41.411 16:38:17 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:41.411 16:38:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:41.411 16:38:17 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:41.411 16:38:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:41.411 16:38:17 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:41.411 16:38:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:41.411 16:38:17 -- bdev/nbd_common.sh@12 -- # local i 00:22:41.411 16:38:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:41.411 16:38:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:41.411 16:38:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:41.411 /dev/nbd0 00:22:41.411 16:38:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:41.411 16:38:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:41.411 16:38:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:22:41.411 16:38:18 -- common/autotest_common.sh@857 -- # local i 00:22:41.411 16:38:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:41.411 16:38:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:41.411 16:38:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:22:41.411 16:38:18 -- common/autotest_common.sh@861 -- # break 00:22:41.411 16:38:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:41.412 16:38:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:41.412 16:38:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:41.412 1+0 records in 00:22:41.412 1+0 records out 00:22:41.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485548 s, 8.4 MB/s 00:22:41.671 16:38:18 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:41.671 16:38:18 -- common/autotest_common.sh@874 -- # size=4096 00:22:41.671 16:38:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:41.671 16:38:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:41.671 16:38:18 -- common/autotest_common.sh@877 -- # return 0 00:22:41.671 16:38:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:41.671 16:38:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:41.671 16:38:18 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:41.671 16:38:18 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:41.671 16:38:18 -- bdev/bdev_raid.sh@678 -- # continue 00:22:41.671 16:38:18 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:41.671 16:38:18 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:41.671 16:38:18 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:41.671 16:38:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:41.671 16:38:18 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:41.671 16:38:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:41.671 16:38:18 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:41.671 16:38:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:41.671 16:38:18 -- bdev/nbd_common.sh@12 -- # local i 00:22:41.671 16:38:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:41.671 16:38:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:41.671 16:38:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:41.930 /dev/nbd1 00:22:41.930 16:38:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:41.930 16:38:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:41.930 16:38:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:41.930 16:38:18 -- common/autotest_common.sh@857 -- # local i 00:22:41.930 16:38:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:41.930 16:38:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:41.930 16:38:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:41.930 16:38:18 -- common/autotest_common.sh@861 -- # break 00:22:41.930 16:38:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:41.930 16:38:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:41.930 16:38:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:41.930 1+0 records in 00:22:41.930 1+0 records out 00:22:41.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309421 s, 13.2 MB/s 00:22:41.930 16:38:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:41.930 16:38:18 -- common/autotest_common.sh@874 -- # size=4096 00:22:41.930 16:38:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:41.930 16:38:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:41.930 16:38:18 -- common/autotest_common.sh@877 -- # return 0 00:22:41.930 16:38:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:41.930 16:38:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:41.930 16:38:18 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:41.930 16:38:18 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 
00:22:41.930 16:38:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:41.930 16:38:18 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:41.930 16:38:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:41.930 16:38:18 -- bdev/nbd_common.sh@51 -- # local i 00:22:41.930 16:38:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:41.930 16:38:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:42.190 16:38:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:42.190 16:38:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:42.190 16:38:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:42.190 16:38:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:42.190 16:38:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:42.190 16:38:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:42.190 16:38:18 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@41 -- # break 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@45 -- # return 0 00:22:42.448 16:38:19 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:42.448 16:38:19 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:42.448 16:38:19 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@12 -- # local i 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:42.448 /dev/nbd1 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:42.448 16:38:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:42.448 16:38:19 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:22:42.448 16:38:19 -- common/autotest_common.sh@857 -- # local i 00:22:42.448 16:38:19 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:22:42.448 16:38:19 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:22:42.448 16:38:19 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:22:42.448 16:38:19 -- common/autotest_common.sh@861 -- # break 00:22:42.448 16:38:19 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:22:42.448 16:38:19 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:22:42.448 16:38:19 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:42.448 1+0 records in 00:22:42.448 1+0 records out 00:22:42.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287708 s, 14.2 MB/s 00:22:42.448 16:38:19 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:42.707 16:38:19 -- common/autotest_common.sh@874 -- # size=4096 
00:22:42.707 16:38:19 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:42.707 16:38:19 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:22:42.707 16:38:19 -- common/autotest_common.sh@877 -- # return 0 00:22:42.707 16:38:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:42.707 16:38:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:42.707 16:38:19 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:42.707 16:38:19 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:42.707 16:38:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:42.707 16:38:19 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:42.708 16:38:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:42.708 16:38:19 -- bdev/nbd_common.sh@51 -- # local i 00:22:42.708 16:38:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:42.708 16:38:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@41 -- # break 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@45 -- # return 0 00:22:42.966 16:38:19 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@51 -- # local i 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:42.966 16:38:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:43.225 16:38:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:43.225 16:38:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:43.225 16:38:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:43.226 16:38:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:43.226 16:38:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:43.226 16:38:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:43.226 16:38:19 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:43.226 16:38:19 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:43.226 16:38:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:43.226 16:38:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:43.226 16:38:20 -- bdev/nbd_common.sh@41 -- # break 00:22:43.226 16:38:20 -- bdev/nbd_common.sh@45 -- # return 0 00:22:43.226 16:38:20 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:43.226 16:38:20 -- bdev/bdev_raid.sh@709 -- # killprocess 129085 00:22:43.226 16:38:20 -- common/autotest_common.sh@926 -- # '[' -z 
129085 ']' 00:22:43.226 16:38:20 -- common/autotest_common.sh@930 -- # kill -0 129085 00:22:43.226 16:38:20 -- common/autotest_common.sh@931 -- # uname 00:22:43.226 16:38:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:43.226 16:38:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129085 00:22:43.226 16:38:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:43.226 killing process with pid 129085 00:22:43.226 Received shutdown signal, test time was about 13.522007 seconds 00:22:43.226 00:22:43.226 Latency(us) 00:22:43.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:43.226 =================================================================================================================== 00:22:43.226 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:43.226 16:38:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:43.226 16:38:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129085' 00:22:43.226 16:38:20 -- common/autotest_common.sh@945 -- # kill 129085 00:22:43.226 [2024-07-11 16:38:20.028676] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:43.226 16:38:20 -- common/autotest_common.sh@950 -- # wait 129085 00:22:43.794 [2024-07-11 16:38:20.300992] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:44.730 16:38:21 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:44.730 00:22:44.730 real 0m19.007s 00:22:44.730 user 0m29.410s 00:22:44.730 sys 0m2.121s 00:22:44.730 16:38:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:44.730 16:38:21 -- common/autotest_common.sh@10 -- # set +x 00:22:44.730 ************************************ 00:22:44.730 END TEST raid_rebuild_test_io 00:22:44.730 ************************************ 00:22:44.730 16:38:21 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:22:44.730 16:38:21 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:22:44.730 16:38:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:44.730 16:38:21 -- common/autotest_common.sh@10 -- # set +x 00:22:44.730 ************************************ 00:22:44.730 START TEST raid_rebuild_test_sb_io 00:22:44.730 ************************************ 00:22:44.730 16:38:21 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:22:44.730 16:38:21 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:44.730 16:38:21 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:44.730 16:38:21 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:44.730 16:38:21 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:44.731 16:38:21 -- 
bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@544 -- # raid_pid=129638 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@545 -- # waitforlisten 129638 /var/tmp/spdk-raid.sock 00:22:44.731 16:38:21 -- common/autotest_common.sh@819 -- # '[' -z 129638 ']' 00:22:44.731 16:38:21 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:44.731 16:38:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:44.731 16:38:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:44.731 16:38:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:44.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:44.731 16:38:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:44.731 16:38:21 -- common/autotest_common.sh@10 -- # set +x 00:22:44.731 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:44.731 Zero copy mechanism will not be used. 00:22:44.731 [2024-07-11 16:38:21.361115] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:44.731 [2024-07-11 16:38:21.361276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129638 ] 00:22:44.731 [2024-07-11 16:38:21.504094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.989 [2024-07-11 16:38:21.665676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.249 [2024-07-11 16:38:21.827830] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:45.507 16:38:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:45.507 16:38:22 -- common/autotest_common.sh@852 -- # return 0 00:22:45.507 16:38:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:45.507 16:38:22 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:45.507 16:38:22 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:45.765 BaseBdev1_malloc 00:22:45.765 16:38:22 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:46.023 [2024-07-11 16:38:22.652641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:46.023 [2024-07-11 16:38:22.652727] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.023 [2024-07-11 16:38:22.652759] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:46.023 [2024-07-11 16:38:22.652812] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.023 [2024-07-11 16:38:22.654836] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.023 [2024-07-11 16:38:22.654878] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:46.023 BaseBdev1 00:22:46.023 16:38:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:46.023 16:38:22 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:46.023 16:38:22 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:46.281 BaseBdev2_malloc 00:22:46.281 16:38:22 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:46.538 [2024-07-11 16:38:23.111213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:46.538 [2024-07-11 16:38:23.111278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.538 [2024-07-11 16:38:23.111315] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:46.538 [2024-07-11 16:38:23.111362] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.538 [2024-07-11 16:38:23.113256] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.538 [2024-07-11 16:38:23.113297] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:46.538 BaseBdev2 00:22:46.538 16:38:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:46.538 16:38:23 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:46.538 16:38:23 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:46.538 BaseBdev3_malloc 00:22:46.538 16:38:23 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:46.795 [2024-07-11 16:38:23.520364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:46.795 [2024-07-11 16:38:23.520449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:46.795 [2024-07-11 16:38:23.520488] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:46.795 [2024-07-11 16:38:23.520529] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:46.795 [2024-07-11 16:38:23.522574] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:46.795 [2024-07-11 16:38:23.522620] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:46.795 BaseBdev3 00:22:46.795 16:38:23 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:46.795 16:38:23 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:46.795 16:38:23 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:47.053 BaseBdev4_malloc 00:22:47.053 16:38:23 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:47.311 [2024-07-11 16:38:23.905446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:47.311 [2024-07-11 16:38:23.905517] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.311 [2024-07-11 16:38:23.905550] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:47.311 [2024-07-11 16:38:23.905590] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.311 [2024-07-11 16:38:23.907473] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.311 [2024-07-11 16:38:23.907518] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:47.311 BaseBdev4 00:22:47.311 16:38:23 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:47.311 spare_malloc 00:22:47.311 16:38:24 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:47.569 spare_delay 00:22:47.569 16:38:24 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:47.828 [2024-07-11 16:38:24.525916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:47.828 [2024-07-11 16:38:24.525995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.828 [2024-07-11 16:38:24.526025] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:47.828 [2024-07-11 16:38:24.526062] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.828 [2024-07-11 16:38:24.528013] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:22:47.828 [2024-07-11 16:38:24.528079] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:47.828 spare 00:22:47.828 16:38:24 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:48.087 [2024-07-11 16:38:24.718030] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:48.087 [2024-07-11 16:38:24.719631] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:48.087 [2024-07-11 16:38:24.719712] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:48.087 [2024-07-11 16:38:24.719766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:48.087 [2024-07-11 16:38:24.720011] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:22:48.087 [2024-07-11 16:38:24.720033] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:48.087 [2024-07-11 16:38:24.720154] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:48.087 [2024-07-11 16:38:24.720507] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:22:48.087 [2024-07-11 16:38:24.720530] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:22:48.087 [2024-07-11 16:38:24.720666] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:48.087 16:38:24 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:48.087 16:38:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:48.087 16:38:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:48.087 16:38:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:48.087 16:38:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:48.087 16:38:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:48.087 16:38:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:48.087 16:38:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:48.087 16:38:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:48.087 16:38:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:48.087 16:38:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.087 16:38:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.346 16:38:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:48.346 "name": "raid_bdev1", 00:22:48.346 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:22:48.346 "strip_size_kb": 0, 00:22:48.346 "state": "online", 00:22:48.346 "raid_level": "raid1", 00:22:48.346 "superblock": true, 00:22:48.346 "num_base_bdevs": 4, 00:22:48.346 "num_base_bdevs_discovered": 4, 00:22:48.346 "num_base_bdevs_operational": 4, 00:22:48.346 "base_bdevs_list": [ 00:22:48.346 { 00:22:48.346 "name": "BaseBdev1", 00:22:48.346 "uuid": "a89f8fad-9e3e-533e-be3e-e816538c1153", 00:22:48.346 "is_configured": true, 00:22:48.346 "data_offset": 2048, 00:22:48.346 "data_size": 63488 00:22:48.346 }, 00:22:48.346 { 00:22:48.346 "name": "BaseBdev2", 00:22:48.346 "uuid": "80d1818b-7915-5291-9a0a-91338d69332c", 00:22:48.346 "is_configured": true, 00:22:48.346 "data_offset": 2048, 
00:22:48.346 "data_size": 63488 00:22:48.346 }, 00:22:48.346 { 00:22:48.346 "name": "BaseBdev3", 00:22:48.346 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 00:22:48.346 "is_configured": true, 00:22:48.346 "data_offset": 2048, 00:22:48.346 "data_size": 63488 00:22:48.346 }, 00:22:48.346 { 00:22:48.346 "name": "BaseBdev4", 00:22:48.346 "uuid": "d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:22:48.346 "is_configured": true, 00:22:48.346 "data_offset": 2048, 00:22:48.346 "data_size": 63488 00:22:48.346 } 00:22:48.346 ] 00:22:48.346 }' 00:22:48.346 16:38:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:48.346 16:38:24 -- common/autotest_common.sh@10 -- # set +x 00:22:48.913 16:38:25 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:48.913 16:38:25 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:49.175 [2024-07-11 16:38:25.734356] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:49.175 16:38:25 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:49.175 16:38:25 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:49.175 16:38:25 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.175 16:38:25 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:49.175 16:38:25 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:49.175 16:38:25 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:49.175 16:38:25 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:49.436 [2024-07-11 16:38:26.032563] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:49.436 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:49.436 Zero copy mechanism will not be used. 00:22:49.436 Running I/O for 60 seconds... 
00:22:49.436 [2024-07-11 16:38:26.107919] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:49.436 [2024-07-11 16:38:26.114043] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:22:49.436 16:38:26 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:49.436 16:38:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:49.436 16:38:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:49.436 16:38:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:49.436 16:38:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:49.436 16:38:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:49.436 16:38:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:49.436 16:38:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:49.436 16:38:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:49.436 16:38:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:49.436 16:38:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.436 16:38:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.694 16:38:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:49.694 "name": "raid_bdev1", 00:22:49.694 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:22:49.694 "strip_size_kb": 0, 00:22:49.694 "state": "online", 00:22:49.694 "raid_level": "raid1", 00:22:49.694 "superblock": true, 00:22:49.694 "num_base_bdevs": 4, 00:22:49.694 "num_base_bdevs_discovered": 3, 00:22:49.694 "num_base_bdevs_operational": 3, 00:22:49.694 "base_bdevs_list": [ 00:22:49.694 { 00:22:49.694 "name": null, 00:22:49.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.694 "is_configured": false, 00:22:49.694 "data_offset": 2048, 00:22:49.694 "data_size": 63488 00:22:49.694 }, 00:22:49.694 { 00:22:49.694 "name": "BaseBdev2", 00:22:49.694 "uuid": "80d1818b-7915-5291-9a0a-91338d69332c", 00:22:49.694 "is_configured": true, 00:22:49.694 "data_offset": 2048, 00:22:49.694 "data_size": 63488 00:22:49.694 }, 00:22:49.694 { 00:22:49.694 "name": "BaseBdev3", 00:22:49.694 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 00:22:49.694 "is_configured": true, 00:22:49.694 "data_offset": 2048, 00:22:49.694 "data_size": 63488 00:22:49.694 }, 00:22:49.694 { 00:22:49.694 "name": "BaseBdev4", 00:22:49.694 "uuid": "d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:22:49.694 "is_configured": true, 00:22:49.694 "data_offset": 2048, 00:22:49.694 "data_size": 63488 00:22:49.694 } 00:22:49.694 ] 00:22:49.694 }' 00:22:49.694 16:38:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:49.694 16:38:26 -- common/autotest_common.sh@10 -- # set +x 00:22:50.261 16:38:26 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:50.519 [2024-07-11 16:38:27.181067] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:50.519 [2024-07-11 16:38:27.181132] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:50.519 16:38:27 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:50.519 [2024-07-11 16:38:27.231729] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:50.519 [2024-07-11 16:38:27.233749] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:50.778 
[2024-07-11 16:38:27.342837] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:50.778 [2024-07-11 16:38:27.343375] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:50.778 [2024-07-11 16:38:27.485593] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:50.779 [2024-07-11 16:38:27.486281] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:51.037 [2024-07-11 16:38:27.822003] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:51.297 [2024-07-11 16:38:27.937864] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:51.555 16:38:28 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:51.555 16:38:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:51.556 16:38:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:51.556 16:38:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:51.556 16:38:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:51.556 16:38:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.556 16:38:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.556 [2024-07-11 16:38:28.278623] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:51.814 16:38:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:51.814 "name": "raid_bdev1", 00:22:51.814 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:22:51.814 "strip_size_kb": 0, 00:22:51.814 "state": "online", 00:22:51.814 "raid_level": "raid1", 00:22:51.814 "superblock": true, 00:22:51.814 "num_base_bdevs": 4, 00:22:51.814 "num_base_bdevs_discovered": 4, 00:22:51.814 "num_base_bdevs_operational": 4, 00:22:51.814 "process": { 00:22:51.814 "type": "rebuild", 00:22:51.814 "target": "spare", 00:22:51.814 "progress": { 00:22:51.814 "blocks": 14336, 00:22:51.814 "percent": 22 00:22:51.814 } 00:22:51.814 }, 00:22:51.814 "base_bdevs_list": [ 00:22:51.814 { 00:22:51.814 "name": "spare", 00:22:51.814 "uuid": "4214f5ab-8adc-5af1-a921-73c29b0570dc", 00:22:51.814 "is_configured": true, 00:22:51.814 "data_offset": 2048, 00:22:51.814 "data_size": 63488 00:22:51.814 }, 00:22:51.814 { 00:22:51.814 "name": "BaseBdev2", 00:22:51.814 "uuid": "80d1818b-7915-5291-9a0a-91338d69332c", 00:22:51.814 "is_configured": true, 00:22:51.814 "data_offset": 2048, 00:22:51.814 "data_size": 63488 00:22:51.814 }, 00:22:51.814 { 00:22:51.814 "name": "BaseBdev3", 00:22:51.814 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 00:22:51.814 "is_configured": true, 00:22:51.814 "data_offset": 2048, 00:22:51.814 "data_size": 63488 00:22:51.814 }, 00:22:51.814 { 00:22:51.814 "name": "BaseBdev4", 00:22:51.814 "uuid": "d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:22:51.814 "is_configured": true, 00:22:51.814 "data_offset": 2048, 00:22:51.814 "data_size": 63488 00:22:51.814 } 00:22:51.814 ] 00:22:51.814 }' 00:22:51.814 16:38:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:51.814 [2024-07-11 16:38:28.489486] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:51.815 [2024-07-11 16:38:28.490137] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:51.815 16:38:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:51.815 16:38:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:51.815 16:38:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:51.815 16:38:28 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:52.073 [2024-07-11 16:38:28.805358] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:52.073 [2024-07-11 16:38:28.839668] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:52.332 [2024-07-11 16:38:28.939763] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:52.332 [2024-07-11 16:38:28.948498] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.332 [2024-07-11 16:38:28.966696] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:22:52.332 16:38:28 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:52.332 16:38:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:52.332 16:38:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:52.332 16:38:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:52.332 16:38:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:52.332 16:38:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:52.332 16:38:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:52.332 16:38:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:52.332 16:38:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:52.332 16:38:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:52.332 16:38:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.332 16:38:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:52.590 16:38:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:52.590 "name": "raid_bdev1", 00:22:52.590 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:22:52.590 "strip_size_kb": 0, 00:22:52.590 "state": "online", 00:22:52.590 "raid_level": "raid1", 00:22:52.590 "superblock": true, 00:22:52.590 "num_base_bdevs": 4, 00:22:52.590 "num_base_bdevs_discovered": 3, 00:22:52.590 "num_base_bdevs_operational": 3, 00:22:52.590 "base_bdevs_list": [ 00:22:52.590 { 00:22:52.590 "name": null, 00:22:52.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:52.590 "is_configured": false, 00:22:52.590 "data_offset": 2048, 00:22:52.590 "data_size": 63488 00:22:52.590 }, 00:22:52.590 { 00:22:52.590 "name": "BaseBdev2", 00:22:52.590 "uuid": "80d1818b-7915-5291-9a0a-91338d69332c", 00:22:52.590 "is_configured": true, 00:22:52.590 "data_offset": 2048, 00:22:52.590 "data_size": 63488 00:22:52.590 }, 00:22:52.590 { 00:22:52.590 "name": "BaseBdev3", 00:22:52.590 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 00:22:52.590 "is_configured": true, 00:22:52.590 "data_offset": 2048, 00:22:52.590 "data_size": 63488 00:22:52.590 }, 00:22:52.590 { 00:22:52.590 "name": "BaseBdev4", 00:22:52.590 "uuid": 
"d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:22:52.590 "is_configured": true, 00:22:52.590 "data_offset": 2048, 00:22:52.590 "data_size": 63488 00:22:52.590 } 00:22:52.590 ] 00:22:52.590 }' 00:22:52.590 16:38:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:52.590 16:38:29 -- common/autotest_common.sh@10 -- # set +x 00:22:53.156 16:38:29 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:53.156 16:38:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:53.156 16:38:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:53.156 16:38:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:53.156 16:38:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:53.156 16:38:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.156 16:38:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.414 16:38:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:53.414 "name": "raid_bdev1", 00:22:53.414 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:22:53.414 "strip_size_kb": 0, 00:22:53.414 "state": "online", 00:22:53.414 "raid_level": "raid1", 00:22:53.414 "superblock": true, 00:22:53.414 "num_base_bdevs": 4, 00:22:53.414 "num_base_bdevs_discovered": 3, 00:22:53.414 "num_base_bdevs_operational": 3, 00:22:53.414 "base_bdevs_list": [ 00:22:53.414 { 00:22:53.414 "name": null, 00:22:53.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:53.414 "is_configured": false, 00:22:53.415 "data_offset": 2048, 00:22:53.415 "data_size": 63488 00:22:53.415 }, 00:22:53.415 { 00:22:53.415 "name": "BaseBdev2", 00:22:53.415 "uuid": "80d1818b-7915-5291-9a0a-91338d69332c", 00:22:53.415 "is_configured": true, 00:22:53.415 "data_offset": 2048, 00:22:53.415 "data_size": 63488 00:22:53.415 }, 00:22:53.415 { 00:22:53.415 "name": "BaseBdev3", 00:22:53.415 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 00:22:53.415 "is_configured": true, 00:22:53.415 "data_offset": 2048, 00:22:53.415 "data_size": 63488 00:22:53.415 }, 00:22:53.415 { 00:22:53.415 "name": "BaseBdev4", 00:22:53.415 "uuid": "d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:22:53.415 "is_configured": true, 00:22:53.415 "data_offset": 2048, 00:22:53.415 "data_size": 63488 00:22:53.415 } 00:22:53.415 ] 00:22:53.415 }' 00:22:53.415 16:38:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:53.415 16:38:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:53.415 16:38:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:53.415 16:38:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:53.415 16:38:30 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:53.672 [2024-07-11 16:38:30.430949] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:53.672 [2024-07-11 16:38:30.431003] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:53.930 16:38:30 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:53.930 [2024-07-11 16:38:30.488130] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:53.930 [2024-07-11 16:38:30.490042] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:53.930 [2024-07-11 16:38:30.615279] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:22:53.930 [2024-07-11 16:38:30.616418] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:54.188 [2024-07-11 16:38:30.826361] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:54.188 [2024-07-11 16:38:30.826686] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:54.446 [2024-07-11 16:38:31.067227] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:54.446 [2024-07-11 16:38:31.067715] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:54.446 [2024-07-11 16:38:31.183805] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:54.704 [2024-07-11 16:38:31.438176] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:54.704 16:38:31 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:54.704 16:38:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:54.704 16:38:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:54.704 16:38:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:54.704 16:38:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:54.704 16:38:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.704 16:38:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.962 16:38:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:54.962 "name": "raid_bdev1", 00:22:54.962 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:22:54.962 "strip_size_kb": 0, 00:22:54.962 "state": "online", 00:22:54.962 "raid_level": "raid1", 00:22:54.962 "superblock": true, 00:22:54.962 "num_base_bdevs": 4, 00:22:54.962 "num_base_bdevs_discovered": 4, 00:22:54.962 "num_base_bdevs_operational": 4, 00:22:54.962 "process": { 00:22:54.962 "type": "rebuild", 00:22:54.962 "target": "spare", 00:22:54.962 "progress": { 00:22:54.962 "blocks": 18432, 00:22:54.962 "percent": 29 00:22:54.962 } 00:22:54.962 }, 00:22:54.962 "base_bdevs_list": [ 00:22:54.962 { 00:22:54.962 "name": "spare", 00:22:54.962 "uuid": "4214f5ab-8adc-5af1-a921-73c29b0570dc", 00:22:54.962 "is_configured": true, 00:22:54.962 "data_offset": 2048, 00:22:54.962 "data_size": 63488 00:22:54.962 }, 00:22:54.962 { 00:22:54.962 "name": "BaseBdev2", 00:22:54.962 "uuid": "80d1818b-7915-5291-9a0a-91338d69332c", 00:22:54.962 "is_configured": true, 00:22:54.962 "data_offset": 2048, 00:22:54.962 "data_size": 63488 00:22:54.962 }, 00:22:54.962 { 00:22:54.962 "name": "BaseBdev3", 00:22:54.962 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 00:22:54.962 "is_configured": true, 00:22:54.962 "data_offset": 2048, 00:22:54.962 "data_size": 63488 00:22:54.962 }, 00:22:54.962 { 00:22:54.962 "name": "BaseBdev4", 00:22:54.962 "uuid": "d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:22:54.962 "is_configured": true, 00:22:54.962 "data_offset": 2048, 00:22:54.962 "data_size": 63488 00:22:54.962 } 00:22:54.962 ] 00:22:54.962 }' 00:22:54.962 16:38:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:54.962 16:38:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:22:54.962 16:38:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:55.221 [2024-07-11 16:38:31.786814] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:55.221 16:38:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:55.221 16:38:31 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:55.221 16:38:31 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:55.221 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:55.221 16:38:31 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:55.221 16:38:31 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:55.221 16:38:31 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:55.221 16:38:31 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:55.221 [2024-07-11 16:38:31.894512] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:55.221 [2024-07-11 16:38:31.895213] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:55.221 [2024-07-11 16:38:32.005200] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:55.480 [2024-07-11 16:38:32.225577] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70 00:22:55.480 [2024-07-11 16:38:32.225619] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ee0 00:22:55.738 [2024-07-11 16:38:32.348671] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:55.738 16:38:32 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:55.738 16:38:32 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:55.738 16:38:32 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:55.738 16:38:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:55.738 16:38:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:55.738 16:38:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:55.738 16:38:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:55.738 16:38:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.738 16:38:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:55.997 "name": "raid_bdev1", 00:22:55.997 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:22:55.997 "strip_size_kb": 0, 00:22:55.997 "state": "online", 00:22:55.997 "raid_level": "raid1", 00:22:55.997 "superblock": true, 00:22:55.997 "num_base_bdevs": 4, 00:22:55.997 "num_base_bdevs_discovered": 3, 00:22:55.997 "num_base_bdevs_operational": 3, 00:22:55.997 "process": { 00:22:55.997 "type": "rebuild", 00:22:55.997 "target": "spare", 00:22:55.997 "progress": { 00:22:55.997 "blocks": 28672, 00:22:55.997 "percent": 45 00:22:55.997 } 00:22:55.997 }, 00:22:55.997 "base_bdevs_list": [ 00:22:55.997 { 00:22:55.997 "name": "spare", 00:22:55.997 "uuid": "4214f5ab-8adc-5af1-a921-73c29b0570dc", 00:22:55.997 "is_configured": true, 00:22:55.997 "data_offset": 2048, 00:22:55.997 "data_size": 63488 00:22:55.997 }, 
00:22:55.997 { 00:22:55.997 "name": null, 00:22:55.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.997 "is_configured": false, 00:22:55.997 "data_offset": 2048, 00:22:55.997 "data_size": 63488 00:22:55.997 }, 00:22:55.997 { 00:22:55.997 "name": "BaseBdev3", 00:22:55.997 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 00:22:55.997 "is_configured": true, 00:22:55.997 "data_offset": 2048, 00:22:55.997 "data_size": 63488 00:22:55.997 }, 00:22:55.997 { 00:22:55.997 "name": "BaseBdev4", 00:22:55.997 "uuid": "d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:22:55.997 "is_configured": true, 00:22:55.997 "data_offset": 2048, 00:22:55.997 "data_size": 63488 00:22:55.997 } 00:22:55.997 ] 00:22:55.997 }' 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@657 -- # local timeout=528 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.997 16:38:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.997 [2024-07-11 16:38:32.709310] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:55.997 [2024-07-11 16:38:32.710281] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:56.256 [2024-07-11 16:38:32.935873] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:22:56.256 16:38:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:56.256 "name": "raid_bdev1", 00:22:56.256 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:22:56.256 "strip_size_kb": 0, 00:22:56.256 "state": "online", 00:22:56.256 "raid_level": "raid1", 00:22:56.256 "superblock": true, 00:22:56.256 "num_base_bdevs": 4, 00:22:56.256 "num_base_bdevs_discovered": 3, 00:22:56.256 "num_base_bdevs_operational": 3, 00:22:56.256 "process": { 00:22:56.256 "type": "rebuild", 00:22:56.256 "target": "spare", 00:22:56.256 "progress": { 00:22:56.256 "blocks": 32768, 00:22:56.256 "percent": 51 00:22:56.256 } 00:22:56.256 }, 00:22:56.256 "base_bdevs_list": [ 00:22:56.256 { 00:22:56.256 "name": "spare", 00:22:56.256 "uuid": "4214f5ab-8adc-5af1-a921-73c29b0570dc", 00:22:56.256 "is_configured": true, 00:22:56.256 "data_offset": 2048, 00:22:56.256 "data_size": 63488 00:22:56.256 }, 00:22:56.256 { 00:22:56.256 "name": null, 00:22:56.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:56.256 "is_configured": false, 00:22:56.256 "data_offset": 2048, 00:22:56.256 "data_size": 63488 00:22:56.256 }, 00:22:56.256 { 00:22:56.256 "name": "BaseBdev3", 00:22:56.256 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 
00:22:56.256 "is_configured": true, 00:22:56.256 "data_offset": 2048, 00:22:56.256 "data_size": 63488 00:22:56.256 }, 00:22:56.256 { 00:22:56.256 "name": "BaseBdev4", 00:22:56.256 "uuid": "d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:22:56.256 "is_configured": true, 00:22:56.256 "data_offset": 2048, 00:22:56.256 "data_size": 63488 00:22:56.256 } 00:22:56.256 ] 00:22:56.256 }' 00:22:56.256 16:38:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:56.256 16:38:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:56.256 16:38:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:56.256 16:38:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:56.256 16:38:33 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:56.514 [2024-07-11 16:38:33.270178] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:56.772 [2024-07-11 16:38:33.490551] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:57.030 [2024-07-11 16:38:33.812233] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:22:57.288 [2024-07-11 16:38:34.037923] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:57.288 16:38:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:57.288 16:38:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.288 16:38:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:57.288 16:38:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:57.288 16:38:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:57.288 16:38:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:57.288 16:38:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.288 16:38:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.546 16:38:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:57.546 "name": "raid_bdev1", 00:22:57.546 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:22:57.546 "strip_size_kb": 0, 00:22:57.546 "state": "online", 00:22:57.546 "raid_level": "raid1", 00:22:57.546 "superblock": true, 00:22:57.546 "num_base_bdevs": 4, 00:22:57.546 "num_base_bdevs_discovered": 3, 00:22:57.546 "num_base_bdevs_operational": 3, 00:22:57.546 "process": { 00:22:57.546 "type": "rebuild", 00:22:57.546 "target": "spare", 00:22:57.546 "progress": { 00:22:57.546 "blocks": 49152, 00:22:57.546 "percent": 77 00:22:57.546 } 00:22:57.546 }, 00:22:57.546 "base_bdevs_list": [ 00:22:57.546 { 00:22:57.546 "name": "spare", 00:22:57.546 "uuid": "4214f5ab-8adc-5af1-a921-73c29b0570dc", 00:22:57.546 "is_configured": true, 00:22:57.546 "data_offset": 2048, 00:22:57.546 "data_size": 63488 00:22:57.546 }, 00:22:57.546 { 00:22:57.546 "name": null, 00:22:57.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.546 "is_configured": false, 00:22:57.546 "data_offset": 2048, 00:22:57.546 "data_size": 63488 00:22:57.546 }, 00:22:57.546 { 00:22:57.546 "name": "BaseBdev3", 00:22:57.546 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 00:22:57.546 "is_configured": true, 00:22:57.546 "data_offset": 2048, 00:22:57.546 "data_size": 63488 00:22:57.546 }, 00:22:57.546 { 00:22:57.546 "name": "BaseBdev4", 00:22:57.546 
"uuid": "d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:22:57.546 "is_configured": true, 00:22:57.546 "data_offset": 2048, 00:22:57.546 "data_size": 63488 00:22:57.546 } 00:22:57.546 ] 00:22:57.546 }' 00:22:57.546 16:38:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:57.805 16:38:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.805 16:38:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:57.805 16:38:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.805 16:38:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:58.063 [2024-07-11 16:38:34.686493] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:22:58.322 [2024-07-11 16:38:35.118154] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:58.580 [2024-07-11 16:38:35.218160] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:58.580 [2024-07-11 16:38:35.220234] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:58.840 16:38:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:58.840 16:38:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:58.840 16:38:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:58.840 16:38:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:58.840 16:38:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:58.840 16:38:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:58.840 16:38:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.840 16:38:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.099 16:38:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:59.099 "name": "raid_bdev1", 00:22:59.099 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:22:59.099 "strip_size_kb": 0, 00:22:59.099 "state": "online", 00:22:59.099 "raid_level": "raid1", 00:22:59.099 "superblock": true, 00:22:59.099 "num_base_bdevs": 4, 00:22:59.099 "num_base_bdevs_discovered": 3, 00:22:59.099 "num_base_bdevs_operational": 3, 00:22:59.099 "base_bdevs_list": [ 00:22:59.099 { 00:22:59.099 "name": "spare", 00:22:59.099 "uuid": "4214f5ab-8adc-5af1-a921-73c29b0570dc", 00:22:59.099 "is_configured": true, 00:22:59.099 "data_offset": 2048, 00:22:59.099 "data_size": 63488 00:22:59.099 }, 00:22:59.099 { 00:22:59.099 "name": null, 00:22:59.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.099 "is_configured": false, 00:22:59.099 "data_offset": 2048, 00:22:59.099 "data_size": 63488 00:22:59.099 }, 00:22:59.099 { 00:22:59.099 "name": "BaseBdev3", 00:22:59.099 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 00:22:59.099 "is_configured": true, 00:22:59.099 "data_offset": 2048, 00:22:59.099 "data_size": 63488 00:22:59.099 }, 00:22:59.099 { 00:22:59.099 "name": "BaseBdev4", 00:22:59.099 "uuid": "d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:22:59.099 "is_configured": true, 00:22:59.099 "data_offset": 2048, 00:22:59.099 "data_size": 63488 00:22:59.099 } 00:22:59.099 ] 00:22:59.099 }' 00:22:59.099 16:38:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:59.099 16:38:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:59.099 16:38:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:59.099 16:38:35 -- bdev/bdev_raid.sh@191 -- 
# [[ none == \s\p\a\r\e ]] 00:22:59.099 16:38:35 -- bdev/bdev_raid.sh@660 -- # break 00:22:59.099 16:38:35 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:59.099 16:38:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:59.099 16:38:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:59.099 16:38:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:59.099 16:38:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:59.099 16:38:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.099 16:38:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.358 16:38:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:59.358 "name": "raid_bdev1", 00:22:59.358 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:22:59.358 "strip_size_kb": 0, 00:22:59.358 "state": "online", 00:22:59.358 "raid_level": "raid1", 00:22:59.358 "superblock": true, 00:22:59.358 "num_base_bdevs": 4, 00:22:59.358 "num_base_bdevs_discovered": 3, 00:22:59.358 "num_base_bdevs_operational": 3, 00:22:59.358 "base_bdevs_list": [ 00:22:59.358 { 00:22:59.358 "name": "spare", 00:22:59.358 "uuid": "4214f5ab-8adc-5af1-a921-73c29b0570dc", 00:22:59.358 "is_configured": true, 00:22:59.358 "data_offset": 2048, 00:22:59.358 "data_size": 63488 00:22:59.358 }, 00:22:59.358 { 00:22:59.358 "name": null, 00:22:59.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.358 "is_configured": false, 00:22:59.358 "data_offset": 2048, 00:22:59.358 "data_size": 63488 00:22:59.358 }, 00:22:59.358 { 00:22:59.358 "name": "BaseBdev3", 00:22:59.358 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 00:22:59.358 "is_configured": true, 00:22:59.358 "data_offset": 2048, 00:22:59.358 "data_size": 63488 00:22:59.358 }, 00:22:59.358 { 00:22:59.358 "name": "BaseBdev4", 00:22:59.358 "uuid": "d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:22:59.358 "is_configured": true, 00:22:59.358 "data_offset": 2048, 00:22:59.358 "data_size": 63488 00:22:59.358 } 00:22:59.358 ] 00:22:59.358 }' 00:22:59.358 16:38:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:59.358 16:38:36 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:59.358 16:38:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:59.618 16:38:36 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:59.618 16:38:36 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:59.618 16:38:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:59.618 16:38:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:59.618 16:38:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:59.618 16:38:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:59.618 16:38:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:59.618 16:38:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:59.618 16:38:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:59.618 16:38:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:59.618 16:38:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:59.618 16:38:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.618 16:38:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.877 16:38:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:22:59.877 "name": "raid_bdev1", 00:22:59.877 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:22:59.877 "strip_size_kb": 0, 00:22:59.877 "state": "online", 00:22:59.877 "raid_level": "raid1", 00:22:59.877 "superblock": true, 00:22:59.877 "num_base_bdevs": 4, 00:22:59.877 "num_base_bdevs_discovered": 3, 00:22:59.877 "num_base_bdevs_operational": 3, 00:22:59.877 "base_bdevs_list": [ 00:22:59.877 { 00:22:59.877 "name": "spare", 00:22:59.877 "uuid": "4214f5ab-8adc-5af1-a921-73c29b0570dc", 00:22:59.877 "is_configured": true, 00:22:59.877 "data_offset": 2048, 00:22:59.877 "data_size": 63488 00:22:59.877 }, 00:22:59.877 { 00:22:59.877 "name": null, 00:22:59.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.877 "is_configured": false, 00:22:59.877 "data_offset": 2048, 00:22:59.877 "data_size": 63488 00:22:59.877 }, 00:22:59.877 { 00:22:59.877 "name": "BaseBdev3", 00:22:59.877 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 00:22:59.877 "is_configured": true, 00:22:59.877 "data_offset": 2048, 00:22:59.877 "data_size": 63488 00:22:59.877 }, 00:22:59.877 { 00:22:59.877 "name": "BaseBdev4", 00:22:59.877 "uuid": "d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:22:59.877 "is_configured": true, 00:22:59.877 "data_offset": 2048, 00:22:59.877 "data_size": 63488 00:22:59.877 } 00:22:59.877 ] 00:22:59.877 }' 00:22:59.877 16:38:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:59.877 16:38:36 -- common/autotest_common.sh@10 -- # set +x 00:23:00.446 16:38:37 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:00.705 [2024-07-11 16:38:37.254730] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:00.705 [2024-07-11 16:38:37.254973] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:00.705 00:23:00.705 Latency(us) 00:23:00.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.705 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:23:00.705 raid_bdev1 : 11.26 102.61 307.83 0.00 0.00 13218.63 303.48 112960.23 00:23:00.705 =================================================================================================================== 00:23:00.705 Total : 102.61 307.83 0.00 0.00 13218.63 303.48 112960.23 00:23:00.705 [2024-07-11 16:38:37.305359] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:00.705 [2024-07-11 16:38:37.305536] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:00.705 [2024-07-11 16:38:37.305673] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:00.705 0 00:23:00.705 [2024-07-11 16:38:37.305874] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:23:00.705 16:38:37 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:00.705 16:38:37 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.964 16:38:37 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:00.964 16:38:37 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:23:00.964 16:38:37 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:23:00.964 16:38:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:00.964 16:38:37 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:00.964 16:38:37 -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:23:00.964 16:38:37 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:00.964 16:38:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:00.964 16:38:37 -- bdev/nbd_common.sh@12 -- # local i 00:23:00.964 16:38:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:00.964 16:38:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:00.964 16:38:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:23:01.223 /dev/nbd0 00:23:01.223 16:38:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:01.223 16:38:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:01.223 16:38:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:01.223 16:38:37 -- common/autotest_common.sh@857 -- # local i 00:23:01.223 16:38:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:01.223 16:38:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:01.223 16:38:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:01.223 16:38:37 -- common/autotest_common.sh@861 -- # break 00:23:01.223 16:38:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:01.223 16:38:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:01.223 16:38:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:01.223 1+0 records in 00:23:01.223 1+0 records out 00:23:01.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479931 s, 8.5 MB/s 00:23:01.224 16:38:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.224 16:38:37 -- common/autotest_common.sh@874 -- # size=4096 00:23:01.224 16:38:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.224 16:38:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:01.224 16:38:37 -- common/autotest_common.sh@877 -- # return 0 00:23:01.224 16:38:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:01.224 16:38:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:01.224 16:38:37 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:01.224 16:38:37 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:23:01.224 16:38:37 -- bdev/bdev_raid.sh@678 -- # continue 00:23:01.224 16:38:37 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:01.224 16:38:37 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:23:01.224 16:38:37 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:23:01.224 16:38:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:01.224 16:38:37 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:01.224 16:38:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:01.224 16:38:37 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:01.224 16:38:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:01.224 16:38:37 -- bdev/nbd_common.sh@12 -- # local i 00:23:01.224 16:38:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:01.224 16:38:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:01.224 16:38:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:23:01.483 /dev/nbd1 00:23:01.483 16:38:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:01.483 16:38:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:01.483 16:38:38 -- common/autotest_common.sh@856 -- # 
local nbd_name=nbd1 00:23:01.483 16:38:38 -- common/autotest_common.sh@857 -- # local i 00:23:01.483 16:38:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:01.483 16:38:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:01.483 16:38:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:01.483 16:38:38 -- common/autotest_common.sh@861 -- # break 00:23:01.483 16:38:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:01.483 16:38:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:01.483 16:38:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:01.483 1+0 records in 00:23:01.483 1+0 records out 00:23:01.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371084 s, 11.0 MB/s 00:23:01.483 16:38:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.483 16:38:38 -- common/autotest_common.sh@874 -- # size=4096 00:23:01.483 16:38:38 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:01.483 16:38:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:01.483 16:38:38 -- common/autotest_common.sh@877 -- # return 0 00:23:01.483 16:38:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:01.483 16:38:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:01.483 16:38:38 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:01.483 16:38:38 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:01.483 16:38:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:01.483 16:38:38 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:01.483 16:38:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:01.483 16:38:38 -- bdev/nbd_common.sh@51 -- # local i 00:23:01.483 16:38:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:01.483 16:38:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:01.742 16:38:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:01.742 16:38:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:01.742 16:38:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:01.742 16:38:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@41 -- # break 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@45 -- # return 0 00:23:01.743 16:38:38 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:23:01.743 16:38:38 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:23:01.743 16:38:38 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@11 -- # local 
nbd_list 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@12 -- # local i 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:01.743 16:38:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:23:02.044 /dev/nbd1 00:23:02.044 16:38:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:02.044 16:38:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:02.044 16:38:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:23:02.044 16:38:38 -- common/autotest_common.sh@857 -- # local i 00:23:02.044 16:38:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:02.044 16:38:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:02.044 16:38:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:23:02.044 16:38:38 -- common/autotest_common.sh@861 -- # break 00:23:02.044 16:38:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:02.044 16:38:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:02.044 16:38:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:02.044 1+0 records in 00:23:02.044 1+0 records out 00:23:02.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050231 s, 8.2 MB/s 00:23:02.044 16:38:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.044 16:38:38 -- common/autotest_common.sh@874 -- # size=4096 00:23:02.044 16:38:38 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:02.044 16:38:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:02.044 16:38:38 -- common/autotest_common.sh@877 -- # return 0 00:23:02.044 16:38:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:02.044 16:38:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:02.044 16:38:38 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:23:02.044 16:38:38 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:23:02.044 16:38:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:02.044 16:38:38 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:02.044 16:38:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:02.044 16:38:38 -- bdev/nbd_common.sh@51 -- # local i 00:23:02.044 16:38:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:02.044 16:38:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@41 -- # break 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@45 -- # return 0 00:23:02.324 16:38:39 -- bdev/bdev_raid.sh@684 -- # 
nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@51 -- # local i 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:02.324 16:38:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:02.893 16:38:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:02.893 16:38:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:02.893 16:38:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:02.893 16:38:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:02.893 16:38:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:02.893 16:38:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:02.893 16:38:39 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:02.893 16:38:39 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:02.893 16:38:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:02.893 16:38:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:02.893 16:38:39 -- bdev/nbd_common.sh@41 -- # break 00:23:02.893 16:38:39 -- bdev/nbd_common.sh@45 -- # return 0 00:23:02.893 16:38:39 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:23:02.893 16:38:39 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:02.893 16:38:39 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:23:02.893 16:38:39 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:23:02.893 16:38:39 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:03.152 [2024-07-11 16:38:39.920340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:03.152 [2024-07-11 16:38:39.920558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.152 [2024-07-11 16:38:39.920635] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:23:03.152 [2024-07-11 16:38:39.920871] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.152 [2024-07-11 16:38:39.923052] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.152 [2024-07-11 16:38:39.923244] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:03.152 [2024-07-11 16:38:39.923463] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:03.152 [2024-07-11 16:38:39.923627] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:03.152 BaseBdev1 00:23:03.152 16:38:39 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:03.152 16:38:39 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:23:03.152 16:38:39 -- bdev/bdev_raid.sh@696 -- # continue 00:23:03.152 16:38:39 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:03.152 16:38:39 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:23:03.152 16:38:39 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:23:03.411 16:38:40 -- bdev/bdev_raid.sh@699 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:23:03.670 [2024-07-11 16:38:40.356467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:03.670 [2024-07-11 16:38:40.356698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.670 [2024-07-11 16:38:40.356781] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:03.670 [2024-07-11 16:38:40.357058] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.670 [2024-07-11 16:38:40.357531] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.670 [2024-07-11 16:38:40.357590] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:03.670 [2024-07-11 16:38:40.357697] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:23:03.670 [2024-07-11 16:38:40.357712] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:23:03.670 [2024-07-11 16:38:40.357719] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:03.670 [2024-07-11 16:38:40.357744] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state configuring 00:23:03.670 [2024-07-11 16:38:40.357812] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:03.670 BaseBdev3 00:23:03.670 16:38:40 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:23:03.670 16:38:40 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:23:03.670 16:38:40 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:23:03.930 16:38:40 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:23:03.930 [2024-07-11 16:38:40.716565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:23:03.930 [2024-07-11 16:38:40.716787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:03.930 [2024-07-11 16:38:40.716852] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:23:03.930 [2024-07-11 16:38:40.717126] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:03.930 [2024-07-11 16:38:40.717549] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:03.930 [2024-07-11 16:38:40.717749] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:03.930 [2024-07-11 16:38:40.717950] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:23:03.930 [2024-07-11 16:38:40.718068] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:03.930 BaseBdev4 00:23:03.930 16:38:40 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:23:04.189 16:38:40 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:04.448 [2024-07-11 16:38:41.080691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:23:04.448 [2024-07-11 16:38:41.080884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:04.448 [2024-07-11 16:38:41.080972] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:23:04.448 [2024-07-11 16:38:41.081111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:04.448 [2024-07-11 16:38:41.081577] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:04.448 [2024-07-11 16:38:41.081734] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:04.448 [2024-07-11 16:38:41.081914] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:23:04.448 [2024-07-11 16:38:41.082031] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:04.448 spare 00:23:04.448 16:38:41 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:23:04.448 16:38:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:04.448 16:38:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:04.448 16:38:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:23:04.448 16:38:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:23:04.448 16:38:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:04.448 16:38:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:04.448 16:38:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:04.448 16:38:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:04.448 16:38:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:04.448 16:38:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.448 16:38:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.448 [2024-07-11 16:38:41.182168] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c680 00:23:04.448 [2024-07-11 16:38:41.182306] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:23:04.448 [2024-07-11 16:38:41.182468] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000039c70 00:23:04.448 [2024-07-11 16:38:41.182978] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c680 00:23:04.448 [2024-07-11 16:38:41.183098] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c680 00:23:04.448 [2024-07-11 16:38:41.183315] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.706 16:38:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:04.706 "name": "raid_bdev1", 00:23:04.706 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:23:04.706 "strip_size_kb": 0, 00:23:04.706 "state": "online", 00:23:04.706 "raid_level": "raid1", 00:23:04.706 "superblock": true, 00:23:04.706 "num_base_bdevs": 4, 00:23:04.706 "num_base_bdevs_discovered": 3, 00:23:04.706 "num_base_bdevs_operational": 3, 00:23:04.706 "base_bdevs_list": [ 00:23:04.706 { 00:23:04.706 "name": "spare", 00:23:04.706 "uuid": "4214f5ab-8adc-5af1-a921-73c29b0570dc", 00:23:04.706 "is_configured": true, 00:23:04.706 "data_offset": 2048, 00:23:04.706 "data_size": 63488 00:23:04.706 }, 00:23:04.706 { 00:23:04.706 "name": null, 00:23:04.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.706 "is_configured": false, 00:23:04.706 
"data_offset": 2048, 00:23:04.706 "data_size": 63488 00:23:04.706 }, 00:23:04.706 { 00:23:04.706 "name": "BaseBdev3", 00:23:04.706 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 00:23:04.706 "is_configured": true, 00:23:04.706 "data_offset": 2048, 00:23:04.706 "data_size": 63488 00:23:04.706 }, 00:23:04.706 { 00:23:04.706 "name": "BaseBdev4", 00:23:04.706 "uuid": "d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:23:04.706 "is_configured": true, 00:23:04.706 "data_offset": 2048, 00:23:04.706 "data_size": 63488 00:23:04.706 } 00:23:04.706 ] 00:23:04.706 }' 00:23:04.706 16:38:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:04.706 16:38:41 -- common/autotest_common.sh@10 -- # set +x 00:23:05.273 16:38:41 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:05.273 16:38:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:05.273 16:38:41 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:05.273 16:38:41 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:05.273 16:38:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:05.273 16:38:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.273 16:38:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:05.532 16:38:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:05.532 "name": "raid_bdev1", 00:23:05.532 "uuid": "0258342d-f32c-4afb-a1d5-1c2c1bce0c62", 00:23:05.532 "strip_size_kb": 0, 00:23:05.532 "state": "online", 00:23:05.532 "raid_level": "raid1", 00:23:05.532 "superblock": true, 00:23:05.532 "num_base_bdevs": 4, 00:23:05.532 "num_base_bdevs_discovered": 3, 00:23:05.532 "num_base_bdevs_operational": 3, 00:23:05.532 "base_bdevs_list": [ 00:23:05.532 { 00:23:05.532 "name": "spare", 00:23:05.532 "uuid": "4214f5ab-8adc-5af1-a921-73c29b0570dc", 00:23:05.532 "is_configured": true, 00:23:05.532 "data_offset": 2048, 00:23:05.532 "data_size": 63488 00:23:05.532 }, 00:23:05.532 { 00:23:05.532 "name": null, 00:23:05.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:05.532 "is_configured": false, 00:23:05.532 "data_offset": 2048, 00:23:05.532 "data_size": 63488 00:23:05.532 }, 00:23:05.532 { 00:23:05.532 "name": "BaseBdev3", 00:23:05.532 "uuid": "98a1a96f-b8b8-5005-a85d-16d6a023ac99", 00:23:05.532 "is_configured": true, 00:23:05.532 "data_offset": 2048, 00:23:05.532 "data_size": 63488 00:23:05.532 }, 00:23:05.532 { 00:23:05.532 "name": "BaseBdev4", 00:23:05.532 "uuid": "d5cac2be-cba9-5a4f-baa1-e6ef21cb56fd", 00:23:05.532 "is_configured": true, 00:23:05.532 "data_offset": 2048, 00:23:05.532 "data_size": 63488 00:23:05.532 } 00:23:05.532 ] 00:23:05.532 }' 00:23:05.532 16:38:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:05.532 16:38:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:05.532 16:38:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:05.532 16:38:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:05.532 16:38:42 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.532 16:38:42 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:05.790 16:38:42 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:23:05.790 16:38:42 -- bdev/bdev_raid.sh@709 -- # killprocess 129638 00:23:05.790 16:38:42 -- common/autotest_common.sh@926 -- # '[' -z 129638 ']' 00:23:05.790 16:38:42 -- 
common/autotest_common.sh@930 -- # kill -0 129638 00:23:05.790 16:38:42 -- common/autotest_common.sh@931 -- # uname 00:23:05.790 16:38:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:05.790 16:38:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 129638 killing process with pid 129638 Received shutdown signal, test time was about 16.432541 seconds
00:23:05.790
00:23:05.790 Latency(us)
00:23:05.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:05.790 ===================================================================================================================
00:23:05.790 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:05.790 16:38:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:05.790 16:38:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:05.790 16:38:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 129638' 00:23:05.790 16:38:42 -- common/autotest_common.sh@945 -- # kill 129638 00:23:05.790 16:38:42 -- common/autotest_common.sh@950 -- # wait 129638 00:23:05.790 [2024-07-11 16:38:42.466983] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:05.790 [2024-07-11 16:38:42.467059] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:05.790 [2024-07-11 16:38:42.467169] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:05.790 [2024-07-11 16:38:42.467288] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c680 name raid_bdev1, state offline 00:23:06.048 [2024-07-11 16:38:42.746336] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:06.980 ************************************ 00:23:06.980 END TEST raid_rebuild_test_sb_io 00:23:06.980 ************************************ 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:06.980 00:23:06.980 real 0m22.389s 00:23:06.980 user 0m36.155s 00:23:06.980 sys 0m2.335s 00:23:06.980 16:38:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:06.980 16:38:43 -- common/autotest_common.sh@10 -- # set +x 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:23:06.980 16:38:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:06.980 16:38:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:06.980 16:38:43 -- common/autotest_common.sh@10 -- # set +x 00:23:06.980 ************************************ 00:23:06.980 START TEST raid5f_state_function_test 00:23:06.980 ************************************ 00:23:06.980 16:38:43 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 false 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@206
-- # echo BaseBdev1 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=130278 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130278' 00:23:06.980 Process raid pid: 130278 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:06.980 16:38:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130278 /var/tmp/spdk-raid.sock 00:23:06.980 16:38:43 -- common/autotest_common.sh@819 -- # '[' -z 130278 ']' 00:23:06.980 16:38:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:06.980 16:38:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:06.980 16:38:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:06.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:06.980 16:38:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:06.980 16:38:43 -- common/autotest_common.sh@10 -- # set +x 00:23:07.239 [2024-07-11 16:38:43.826646] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
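For reference, the daemon bring-up traced above reduces to the following sketch. The bdev_svc path, its flags, and the RPC socket are copied verbatim from the log; the polling loop is only an assumption about what waitforlisten does internally, with rpc_get_methods used as a liveness probe.

    # Minimal sketch, assuming waitforlisten amounts to polling the RPC socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Block until the UNIX-domain RPC socket answers a trivial query.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
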
00:23:07.239 [2024-07-11 16:38:43.827611] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.239 [2024-07-11 16:38:43.989801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.497 [2024-07-11 16:38:44.157940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.755 [2024-07-11 16:38:44.329843] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:08.013 16:38:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:08.013 16:38:44 -- common/autotest_common.sh@852 -- # return 0 00:23:08.013 16:38:44 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:08.272 [2024-07-11 16:38:45.020382] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:08.272 [2024-07-11 16:38:45.020618] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:08.272 [2024-07-11 16:38:45.020726] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:08.272 [2024-07-11 16:38:45.020818] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:08.272 [2024-07-11 16:38:45.020988] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:08.272 [2024-07-11 16:38:45.021159] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:08.272 16:38:45 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:08.272 16:38:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:08.272 16:38:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:08.272 16:38:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:08.272 16:38:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:08.272 16:38:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:08.272 16:38:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:08.272 16:38:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:08.272 16:38:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:08.272 16:38:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:08.272 16:38:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.272 16:38:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.540 16:38:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:08.540 "name": "Existed_Raid", 00:23:08.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.540 "strip_size_kb": 64, 00:23:08.540 "state": "configuring", 00:23:08.540 "raid_level": "raid5f", 00:23:08.540 "superblock": false, 00:23:08.540 "num_base_bdevs": 3, 00:23:08.540 "num_base_bdevs_discovered": 0, 00:23:08.540 "num_base_bdevs_operational": 3, 00:23:08.540 "base_bdevs_list": [ 00:23:08.540 { 00:23:08.540 "name": "BaseBdev1", 00:23:08.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.540 "is_configured": false, 00:23:08.540 "data_offset": 0, 00:23:08.540 "data_size": 0 00:23:08.540 }, 00:23:08.540 { 00:23:08.540 "name": "BaseBdev2", 00:23:08.540 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:23:08.540 "is_configured": false, 00:23:08.540 "data_offset": 0, 00:23:08.540 "data_size": 0 00:23:08.540 }, 00:23:08.540 { 00:23:08.540 "name": "BaseBdev3", 00:23:08.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.540 "is_configured": false, 00:23:08.540 "data_offset": 0, 00:23:08.540 "data_size": 0 00:23:08.540 } 00:23:08.540 ] 00:23:08.540 }' 00:23:08.540 16:38:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:08.540 16:38:45 -- common/autotest_common.sh@10 -- # set +x 00:23:09.114 16:38:45 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:09.372 [2024-07-11 16:38:46.132468] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:09.372 [2024-07-11 16:38:46.132612] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:09.372 16:38:46 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:09.630 [2024-07-11 16:38:46.308530] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:09.630 [2024-07-11 16:38:46.308739] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:09.630 [2024-07-11 16:38:46.308858] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:09.630 [2024-07-11 16:38:46.308913] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:09.630 [2024-07-11 16:38:46.309160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:09.630 [2024-07-11 16:38:46.309242] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:09.630 16:38:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:09.888 [2024-07-11 16:38:46.525742] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:09.888 BaseBdev1 00:23:09.888 16:38:46 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:09.888 16:38:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:09.888 16:38:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:09.888 16:38:46 -- common/autotest_common.sh@889 -- # local i 00:23:09.888 16:38:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:09.888 16:38:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:09.888 16:38:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:10.146 16:38:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:10.146 [ 00:23:10.146 { 00:23:10.146 "name": "BaseBdev1", 00:23:10.147 "aliases": [ 00:23:10.147 "220959b7-8980-4e28-86e4-7684e6e3f8e1" 00:23:10.147 ], 00:23:10.147 "product_name": "Malloc disk", 00:23:10.147 "block_size": 512, 00:23:10.147 "num_blocks": 65536, 00:23:10.147 "uuid": "220959b7-8980-4e28-86e4-7684e6e3f8e1", 00:23:10.147 "assigned_rate_limits": { 00:23:10.147 "rw_ios_per_sec": 0, 00:23:10.147 "rw_mbytes_per_sec": 0, 00:23:10.147 "r_mbytes_per_sec": 0, 00:23:10.147 "w_mbytes_per_sec": 
0 00:23:10.147 }, 00:23:10.147 "claimed": true, 00:23:10.147 "claim_type": "exclusive_write", 00:23:10.147 "zoned": false, 00:23:10.147 "supported_io_types": { 00:23:10.147 "read": true, 00:23:10.147 "write": true, 00:23:10.147 "unmap": true, 00:23:10.147 "write_zeroes": true, 00:23:10.147 "flush": true, 00:23:10.147 "reset": true, 00:23:10.147 "compare": false, 00:23:10.147 "compare_and_write": false, 00:23:10.147 "abort": true, 00:23:10.147 "nvme_admin": false, 00:23:10.147 "nvme_io": false 00:23:10.147 }, 00:23:10.147 "memory_domains": [ 00:23:10.147 { 00:23:10.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.147 "dma_device_type": 2 00:23:10.147 } 00:23:10.147 ], 00:23:10.147 "driver_specific": {} 00:23:10.147 } 00:23:10.147 ] 00:23:10.147 16:38:46 -- common/autotest_common.sh@895 -- # return 0 00:23:10.147 16:38:46 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:10.147 16:38:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:10.147 16:38:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:10.147 16:38:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:10.147 16:38:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:10.147 16:38:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:10.147 16:38:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:10.147 16:38:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:10.147 16:38:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:10.147 16:38:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:10.147 16:38:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.147 16:38:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.406 16:38:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:10.406 "name": "Existed_Raid", 00:23:10.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.406 "strip_size_kb": 64, 00:23:10.406 "state": "configuring", 00:23:10.406 "raid_level": "raid5f", 00:23:10.406 "superblock": false, 00:23:10.406 "num_base_bdevs": 3, 00:23:10.406 "num_base_bdevs_discovered": 1, 00:23:10.406 "num_base_bdevs_operational": 3, 00:23:10.406 "base_bdevs_list": [ 00:23:10.406 { 00:23:10.406 "name": "BaseBdev1", 00:23:10.406 "uuid": "220959b7-8980-4e28-86e4-7684e6e3f8e1", 00:23:10.406 "is_configured": true, 00:23:10.406 "data_offset": 0, 00:23:10.406 "data_size": 65536 00:23:10.406 }, 00:23:10.406 { 00:23:10.406 "name": "BaseBdev2", 00:23:10.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.406 "is_configured": false, 00:23:10.406 "data_offset": 0, 00:23:10.406 "data_size": 0 00:23:10.406 }, 00:23:10.406 { 00:23:10.406 "name": "BaseBdev3", 00:23:10.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.406 "is_configured": false, 00:23:10.406 "data_offset": 0, 00:23:10.406 "data_size": 0 00:23:10.406 } 00:23:10.406 ] 00:23:10.406 }' 00:23:10.406 16:38:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:10.406 16:38:47 -- common/autotest_common.sh@10 -- # set +x 00:23:10.972 16:38:47 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:11.232 [2024-07-11 16:38:47.938014] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:11.232 [2024-07-11 16:38:47.938180] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:23:11.232 16:38:47 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:23:11.232 16:38:47 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:11.491 [2024-07-11 16:38:48.118093] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:11.491 [2024-07-11 16:38:48.119749] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:11.491 [2024-07-11 16:38:48.119929] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:11.491 [2024-07-11 16:38:48.120025] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:11.491 [2024-07-11 16:38:48.120083] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.491 16:38:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.750 16:38:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:11.750 "name": "Existed_Raid", 00:23:11.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.750 "strip_size_kb": 64, 00:23:11.750 "state": "configuring", 00:23:11.750 "raid_level": "raid5f", 00:23:11.750 "superblock": false, 00:23:11.750 "num_base_bdevs": 3, 00:23:11.750 "num_base_bdevs_discovered": 1, 00:23:11.750 "num_base_bdevs_operational": 3, 00:23:11.750 "base_bdevs_list": [ 00:23:11.750 { 00:23:11.750 "name": "BaseBdev1", 00:23:11.750 "uuid": "220959b7-8980-4e28-86e4-7684e6e3f8e1", 00:23:11.750 "is_configured": true, 00:23:11.750 "data_offset": 0, 00:23:11.750 "data_size": 65536 00:23:11.750 }, 00:23:11.750 { 00:23:11.750 "name": "BaseBdev2", 00:23:11.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.750 "is_configured": false, 00:23:11.750 "data_offset": 0, 00:23:11.750 "data_size": 0 00:23:11.750 }, 00:23:11.750 { 00:23:11.750 "name": "BaseBdev3", 00:23:11.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.750 "is_configured": false, 00:23:11.750 "data_offset": 0, 00:23:11.750 "data_size": 0 00:23:11.750 } 00:23:11.750 ] 00:23:11.750 }' 00:23:11.751 16:38:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:11.751 16:38:48 -- common/autotest_common.sh@10 -- # set +x 00:23:12.318 16:38:49 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:12.576 [2024-07-11 16:38:49.322061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:12.576 BaseBdev2 00:23:12.576 16:38:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:12.576 16:38:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:12.576 16:38:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:12.576 16:38:49 -- common/autotest_common.sh@889 -- # local i 00:23:12.576 16:38:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:12.576 16:38:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:12.576 16:38:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:12.835 16:38:49 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:13.094 [ 00:23:13.094 { 00:23:13.094 "name": "BaseBdev2", 00:23:13.094 "aliases": [ 00:23:13.094 "4d39afcc-dc55-4b53-855e-51876b76d84f" 00:23:13.094 ], 00:23:13.094 "product_name": "Malloc disk", 00:23:13.094 "block_size": 512, 00:23:13.094 "num_blocks": 65536, 00:23:13.094 "uuid": "4d39afcc-dc55-4b53-855e-51876b76d84f", 00:23:13.094 "assigned_rate_limits": { 00:23:13.094 "rw_ios_per_sec": 0, 00:23:13.094 "rw_mbytes_per_sec": 0, 00:23:13.094 "r_mbytes_per_sec": 0, 00:23:13.094 "w_mbytes_per_sec": 0 00:23:13.094 }, 00:23:13.094 "claimed": true, 00:23:13.094 "claim_type": "exclusive_write", 00:23:13.094 "zoned": false, 00:23:13.094 "supported_io_types": { 00:23:13.094 "read": true, 00:23:13.094 "write": true, 00:23:13.094 "unmap": true, 00:23:13.094 "write_zeroes": true, 00:23:13.094 "flush": true, 00:23:13.094 "reset": true, 00:23:13.094 "compare": false, 00:23:13.094 "compare_and_write": false, 00:23:13.094 "abort": true, 00:23:13.094 "nvme_admin": false, 00:23:13.094 "nvme_io": false 00:23:13.094 }, 00:23:13.094 "memory_domains": [ 00:23:13.094 { 00:23:13.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.094 "dma_device_type": 2 00:23:13.094 } 00:23:13.094 ], 00:23:13.094 "driver_specific": {} 00:23:13.094 } 00:23:13.094 ] 00:23:13.094 16:38:49 -- common/autotest_common.sh@895 -- # return 0 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.094 16:38:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
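The grow-the-array step that repeats through this test is three RPC round-trips. A condensed sketch follows; the paths, socket, and arguments are verbatim from the trace (32 is the malloc size in MiB and 512 the block size, which matches the 65536 num_blocks reported for each base bdev), while the jq projection is an illustrative shorthand rather than the script's exact filter.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Create a 32 MiB malloc bdev with 512-byte blocks (65536 blocks).
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2
    # Wait for examine callbacks; the trace shows the raid module claiming the new bdev here.
    "$rpc" -s "$sock" bdev_wait_for_examine
    # Confirm the raid bdev now counts one more discovered base bdev (2 at this point).
    "$rpc" -s "$sock" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'
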
00:23:13.353 16:38:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:13.353 "name": "Existed_Raid", 00:23:13.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.353 "strip_size_kb": 64, 00:23:13.353 "state": "configuring", 00:23:13.353 "raid_level": "raid5f", 00:23:13.353 "superblock": false, 00:23:13.353 "num_base_bdevs": 3, 00:23:13.353 "num_base_bdevs_discovered": 2, 00:23:13.353 "num_base_bdevs_operational": 3, 00:23:13.353 "base_bdevs_list": [ 00:23:13.353 { 00:23:13.353 "name": "BaseBdev1", 00:23:13.353 "uuid": "220959b7-8980-4e28-86e4-7684e6e3f8e1", 00:23:13.353 "is_configured": true, 00:23:13.353 "data_offset": 0, 00:23:13.353 "data_size": 65536 00:23:13.353 }, 00:23:13.353 { 00:23:13.353 "name": "BaseBdev2", 00:23:13.353 "uuid": "4d39afcc-dc55-4b53-855e-51876b76d84f", 00:23:13.353 "is_configured": true, 00:23:13.353 "data_offset": 0, 00:23:13.353 "data_size": 65536 00:23:13.353 }, 00:23:13.353 { 00:23:13.353 "name": "BaseBdev3", 00:23:13.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.353 "is_configured": false, 00:23:13.353 "data_offset": 0, 00:23:13.353 "data_size": 0 00:23:13.353 } 00:23:13.353 ] 00:23:13.353 }' 00:23:13.353 16:38:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:13.353 16:38:49 -- common/autotest_common.sh@10 -- # set +x 00:23:13.919 16:38:50 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:14.178 [2024-07-11 16:38:50.825909] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:14.178 [2024-07-11 16:38:50.826167] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:23:14.178 [2024-07-11 16:38:50.826209] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:14.178 [2024-07-11 16:38:50.826428] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:23:14.178 [2024-07-11 16:38:50.830877] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:23:14.178 [2024-07-11 16:38:50.831026] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:23:14.178 [2024-07-11 16:38:50.831384] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:14.178 BaseBdev3 00:23:14.178 16:38:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:14.178 16:38:50 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:14.178 16:38:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:14.178 16:38:50 -- common/autotest_common.sh@889 -- # local i 00:23:14.178 16:38:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:14.178 16:38:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:14.178 16:38:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:14.437 16:38:51 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:14.437 [ 00:23:14.437 { 00:23:14.437 "name": "BaseBdev3", 00:23:14.437 "aliases": [ 00:23:14.437 "35a881a5-89a7-4100-9468-8fcacf19de95" 00:23:14.437 ], 00:23:14.437 "product_name": "Malloc disk", 00:23:14.437 "block_size": 512, 00:23:14.437 "num_blocks": 65536, 00:23:14.437 "uuid": "35a881a5-89a7-4100-9468-8fcacf19de95", 00:23:14.437 "assigned_rate_limits": { 00:23:14.437 
"rw_ios_per_sec": 0, 00:23:14.437 "rw_mbytes_per_sec": 0, 00:23:14.437 "r_mbytes_per_sec": 0, 00:23:14.437 "w_mbytes_per_sec": 0 00:23:14.437 }, 00:23:14.437 "claimed": true, 00:23:14.437 "claim_type": "exclusive_write", 00:23:14.437 "zoned": false, 00:23:14.437 "supported_io_types": { 00:23:14.437 "read": true, 00:23:14.437 "write": true, 00:23:14.437 "unmap": true, 00:23:14.437 "write_zeroes": true, 00:23:14.437 "flush": true, 00:23:14.437 "reset": true, 00:23:14.437 "compare": false, 00:23:14.437 "compare_and_write": false, 00:23:14.437 "abort": true, 00:23:14.437 "nvme_admin": false, 00:23:14.437 "nvme_io": false 00:23:14.437 }, 00:23:14.437 "memory_domains": [ 00:23:14.437 { 00:23:14.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.437 "dma_device_type": 2 00:23:14.437 } 00:23:14.437 ], 00:23:14.437 "driver_specific": {} 00:23:14.437 } 00:23:14.437 ] 00:23:14.437 16:38:51 -- common/autotest_common.sh@895 -- # return 0 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:14.437 16:38:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:14.696 16:38:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:14.696 "name": "Existed_Raid", 00:23:14.696 "uuid": "0f0f42cf-6db6-449f-bbf3-2774612330e9", 00:23:14.696 "strip_size_kb": 64, 00:23:14.696 "state": "online", 00:23:14.696 "raid_level": "raid5f", 00:23:14.696 "superblock": false, 00:23:14.696 "num_base_bdevs": 3, 00:23:14.696 "num_base_bdevs_discovered": 3, 00:23:14.696 "num_base_bdevs_operational": 3, 00:23:14.696 "base_bdevs_list": [ 00:23:14.696 { 00:23:14.696 "name": "BaseBdev1", 00:23:14.696 "uuid": "220959b7-8980-4e28-86e4-7684e6e3f8e1", 00:23:14.696 "is_configured": true, 00:23:14.696 "data_offset": 0, 00:23:14.696 "data_size": 65536 00:23:14.696 }, 00:23:14.696 { 00:23:14.696 "name": "BaseBdev2", 00:23:14.696 "uuid": "4d39afcc-dc55-4b53-855e-51876b76d84f", 00:23:14.696 "is_configured": true, 00:23:14.696 "data_offset": 0, 00:23:14.696 "data_size": 65536 00:23:14.696 }, 00:23:14.696 { 00:23:14.696 "name": "BaseBdev3", 00:23:14.696 "uuid": "35a881a5-89a7-4100-9468-8fcacf19de95", 00:23:14.696 "is_configured": true, 00:23:14.696 "data_offset": 0, 00:23:14.696 "data_size": 65536 00:23:14.696 } 00:23:14.696 ] 00:23:14.696 }' 00:23:14.696 16:38:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:14.696 16:38:51 -- common/autotest_common.sh@10 -- # set +x 00:23:15.631 16:38:52 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:23:15.631 [2024-07-11 16:38:52.240600] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:15.631 16:38:52 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:15.631 16:38:52 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.632 16:38:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:15.904 16:38:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:15.904 "name": "Existed_Raid", 00:23:15.904 "uuid": "0f0f42cf-6db6-449f-bbf3-2774612330e9", 00:23:15.904 "strip_size_kb": 64, 00:23:15.904 "state": "online", 00:23:15.904 "raid_level": "raid5f", 00:23:15.904 "superblock": false, 00:23:15.904 "num_base_bdevs": 3, 00:23:15.905 "num_base_bdevs_discovered": 2, 00:23:15.905 "num_base_bdevs_operational": 2, 00:23:15.905 "base_bdevs_list": [ 00:23:15.905 { 00:23:15.905 "name": null, 00:23:15.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:15.905 "is_configured": false, 00:23:15.905 "data_offset": 0, 00:23:15.905 "data_size": 65536 00:23:15.905 }, 00:23:15.905 { 00:23:15.905 "name": "BaseBdev2", 00:23:15.905 "uuid": "4d39afcc-dc55-4b53-855e-51876b76d84f", 00:23:15.905 "is_configured": true, 00:23:15.905 "data_offset": 0, 00:23:15.905 "data_size": 65536 00:23:15.905 }, 00:23:15.905 { 00:23:15.905 "name": "BaseBdev3", 00:23:15.905 "uuid": "35a881a5-89a7-4100-9468-8fcacf19de95", 00:23:15.905 "is_configured": true, 00:23:15.905 "data_offset": 0, 00:23:15.905 "data_size": 65536 00:23:15.905 } 00:23:15.905 ] 00:23:15.905 }' 00:23:15.905 16:38:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:15.905 16:38:52 -- common/autotest_common.sh@10 -- # set +x 00:23:16.502 16:38:53 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:16.502 16:38:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:16.502 16:38:53 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.502 16:38:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:16.760 16:38:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:16.760 16:38:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:16.760 16:38:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:17.018 [2024-07-11 16:38:53.631676] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:17.018 [2024-07-11 16:38:53.631851] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:17.018 [2024-07-11 16:38:53.632029] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:17.018 16:38:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:17.018 16:38:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:17.018 16:38:53 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.018 16:38:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:17.276 16:38:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:17.276 16:38:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:17.276 16:38:53 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:17.535 [2024-07-11 16:38:54.125033] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:17.535 [2024-07-11 16:38:54.125220] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:23:17.535 16:38:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:17.535 16:38:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:17.535 16:38:54 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.535 16:38:54 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:17.792 16:38:54 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:17.792 16:38:54 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:17.792 16:38:54 -- bdev/bdev_raid.sh@287 -- # killprocess 130278 00:23:17.792 16:38:54 -- common/autotest_common.sh@926 -- # '[' -z 130278 ']' 00:23:17.792 16:38:54 -- common/autotest_common.sh@930 -- # kill -0 130278 00:23:17.792 16:38:54 -- common/autotest_common.sh@931 -- # uname 00:23:17.792 16:38:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:17.792 16:38:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130278 00:23:17.792 16:38:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:17.792 16:38:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:17.792 killing process with pid 130278 00:23:17.792 16:38:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130278' 00:23:17.792 16:38:54 -- common/autotest_common.sh@945 -- # kill 130278 00:23:17.792 16:38:54 -- common/autotest_common.sh@950 -- # wait 130278 00:23:17.792 [2024-07-11 16:38:54.421039] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:17.792 [2024-07-11 16:38:54.421376] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:18.997 ************************************ 00:23:18.997 END TEST raid5f_state_function_test 00:23:18.997 ************************************ 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:18.997 00:23:18.997 real 0m11.579s 00:23:18.997 user 0m20.740s 00:23:18.997 sys 0m1.259s 00:23:18.997 16:38:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:18.997 16:38:55 -- common/autotest_common.sh@10 -- # set +x 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:23:18.997 16:38:55 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:23:18.997 
16:38:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:18.997 16:38:55 -- common/autotest_common.sh@10 -- # set +x 00:23:18.997 ************************************ 00:23:18.997 START TEST raid5f_state_function_test_sb 00:23:18.997 ************************************ 00:23:18.997 16:38:55 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 3 true 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@226 -- # raid_pid=130692 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:18.997 Process raid pid: 130692 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 130692' 00:23:18.997 16:38:55 -- bdev/bdev_raid.sh@228 -- # waitforlisten 130692 /var/tmp/spdk-raid.sock 00:23:18.997 16:38:55 -- common/autotest_common.sh@819 -- # '[' -z 130692 ']' 00:23:18.997 16:38:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:18.997 16:38:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:18.997 16:38:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:18.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
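The superblock variant that starts here differs from the run above in exactly one flag: superblock_create_arg expands to -s on the create call, sketched below with arguments copied from the trace. The comment on its on-disk effect is inferred from the JSON later in this run, where base bdevs report data_offset 2048 and data_size 63488 instead of 0 and 65536.

    # With -s, each 65536-block base bdev apparently gives up its first 2048 blocks
    # to the raid superblock, leaving 63488 data blocks (an inference from the log).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
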
00:23:18.997 16:38:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:18.997 16:38:55 -- common/autotest_common.sh@10 -- # set +x 00:23:18.997 [2024-07-11 16:38:55.442725] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:18.997 [2024-07-11 16:38:55.443054] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.997 [2024-07-11 16:38:55.586326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.997 [2024-07-11 16:38:55.748055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:19.254 [2024-07-11 16:38:55.916550] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:19.818 16:38:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:19.818 16:38:56 -- common/autotest_common.sh@852 -- # return 0 00:23:19.818 16:38:56 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:19.818 [2024-07-11 16:38:56.601139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:19.818 [2024-07-11 16:38:56.601343] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:19.818 [2024-07-11 16:38:56.601448] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:19.818 [2024-07-11 16:38:56.601568] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:19.818 [2024-07-11 16:38:56.601659] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:19.818 [2024-07-11 16:38:56.601798] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:19.818 16:38:56 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:19.818 16:38:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:19.818 16:38:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:19.818 16:38:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:19.818 16:38:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:19.818 16:38:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:19.818 16:38:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:19.818 16:38:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.818 16:38:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.818 16:38:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.818 16:38:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.818 16:38:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:20.074 16:38:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:20.074 "name": "Existed_Raid", 00:23:20.074 "uuid": "fe64f06a-65f5-4e17-ab78-42c2c3a6e628", 00:23:20.074 "strip_size_kb": 64, 00:23:20.074 "state": "configuring", 00:23:20.074 "raid_level": "raid5f", 00:23:20.074 "superblock": true, 00:23:20.074 "num_base_bdevs": 3, 00:23:20.074 "num_base_bdevs_discovered": 0, 00:23:20.074 "num_base_bdevs_operational": 3, 00:23:20.074 "base_bdevs_list": [ 00:23:20.074 { 00:23:20.074 "name": 
"BaseBdev1", 00:23:20.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.074 "is_configured": false, 00:23:20.074 "data_offset": 0, 00:23:20.074 "data_size": 0 00:23:20.074 }, 00:23:20.074 { 00:23:20.074 "name": "BaseBdev2", 00:23:20.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.074 "is_configured": false, 00:23:20.074 "data_offset": 0, 00:23:20.074 "data_size": 0 00:23:20.074 }, 00:23:20.074 { 00:23:20.074 "name": "BaseBdev3", 00:23:20.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:20.074 "is_configured": false, 00:23:20.074 "data_offset": 0, 00:23:20.074 "data_size": 0 00:23:20.074 } 00:23:20.074 ] 00:23:20.074 }' 00:23:20.074 16:38:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:20.074 16:38:56 -- common/autotest_common.sh@10 -- # set +x 00:23:20.643 16:38:57 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:20.901 [2024-07-11 16:38:57.601176] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:20.901 [2024-07-11 16:38:57.601321] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:20.901 16:38:57 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:21.160 [2024-07-11 16:38:57.845270] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:21.160 [2024-07-11 16:38:57.845451] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:21.160 [2024-07-11 16:38:57.845548] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:21.160 [2024-07-11 16:38:57.845600] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:21.160 [2024-07-11 16:38:57.845684] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:21.160 [2024-07-11 16:38:57.845747] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:21.160 16:38:57 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:21.417 [2024-07-11 16:38:58.054389] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:21.417 BaseBdev1 00:23:21.417 16:38:58 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:21.417 16:38:58 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:21.417 16:38:58 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:21.417 16:38:58 -- common/autotest_common.sh@889 -- # local i 00:23:21.417 16:38:58 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:21.417 16:38:58 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:21.417 16:38:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:21.675 16:38:58 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:21.933 [ 00:23:21.933 { 00:23:21.933 "name": "BaseBdev1", 00:23:21.933 "aliases": [ 00:23:21.933 "2a78e409-8d68-4bb8-9e58-ca66aa954b4e" 00:23:21.933 ], 00:23:21.933 "product_name": "Malloc disk", 00:23:21.933 "block_size": 512, 00:23:21.933 
"num_blocks": 65536, 00:23:21.933 "uuid": "2a78e409-8d68-4bb8-9e58-ca66aa954b4e", 00:23:21.933 "assigned_rate_limits": { 00:23:21.933 "rw_ios_per_sec": 0, 00:23:21.933 "rw_mbytes_per_sec": 0, 00:23:21.933 "r_mbytes_per_sec": 0, 00:23:21.933 "w_mbytes_per_sec": 0 00:23:21.933 }, 00:23:21.933 "claimed": true, 00:23:21.933 "claim_type": "exclusive_write", 00:23:21.933 "zoned": false, 00:23:21.933 "supported_io_types": { 00:23:21.933 "read": true, 00:23:21.933 "write": true, 00:23:21.933 "unmap": true, 00:23:21.933 "write_zeroes": true, 00:23:21.933 "flush": true, 00:23:21.933 "reset": true, 00:23:21.933 "compare": false, 00:23:21.933 "compare_and_write": false, 00:23:21.933 "abort": true, 00:23:21.933 "nvme_admin": false, 00:23:21.933 "nvme_io": false 00:23:21.933 }, 00:23:21.933 "memory_domains": [ 00:23:21.933 { 00:23:21.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.933 "dma_device_type": 2 00:23:21.933 } 00:23:21.933 ], 00:23:21.933 "driver_specific": {} 00:23:21.933 } 00:23:21.933 ] 00:23:21.933 16:38:58 -- common/autotest_common.sh@895 -- # return 0 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:21.933 "name": "Existed_Raid", 00:23:21.933 "uuid": "0352002b-10b6-4f32-a422-ab523cb597a2", 00:23:21.933 "strip_size_kb": 64, 00:23:21.933 "state": "configuring", 00:23:21.933 "raid_level": "raid5f", 00:23:21.933 "superblock": true, 00:23:21.933 "num_base_bdevs": 3, 00:23:21.933 "num_base_bdevs_discovered": 1, 00:23:21.933 "num_base_bdevs_operational": 3, 00:23:21.933 "base_bdevs_list": [ 00:23:21.933 { 00:23:21.933 "name": "BaseBdev1", 00:23:21.933 "uuid": "2a78e409-8d68-4bb8-9e58-ca66aa954b4e", 00:23:21.933 "is_configured": true, 00:23:21.933 "data_offset": 2048, 00:23:21.933 "data_size": 63488 00:23:21.933 }, 00:23:21.933 { 00:23:21.933 "name": "BaseBdev2", 00:23:21.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.933 "is_configured": false, 00:23:21.933 "data_offset": 0, 00:23:21.933 "data_size": 0 00:23:21.933 }, 00:23:21.933 { 00:23:21.933 "name": "BaseBdev3", 00:23:21.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.933 "is_configured": false, 00:23:21.933 "data_offset": 0, 00:23:21.933 "data_size": 0 00:23:21.933 } 00:23:21.933 ] 00:23:21.933 }' 00:23:21.933 16:38:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:21.933 16:38:58 -- common/autotest_common.sh@10 -- # set +x 00:23:22.868 16:38:59 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:22.868 [2024-07-11 16:38:59.606681] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:22.868 [2024-07-11 16:38:59.606877] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:22.868 16:38:59 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:23:22.868 16:38:59 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:23.125 16:38:59 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:23.384 BaseBdev1 00:23:23.384 16:39:00 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:23:23.384 16:39:00 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:23:23.384 16:39:00 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:23.384 16:39:00 -- common/autotest_common.sh@889 -- # local i 00:23:23.384 16:39:00 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:23.384 16:39:00 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:23.384 16:39:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:23.642 16:39:00 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:23.642 [ 00:23:23.642 { 00:23:23.642 "name": "BaseBdev1", 00:23:23.642 "aliases": [ 00:23:23.642 "9fe288e5-6209-485d-874a-d27fa2a2e11a" 00:23:23.642 ], 00:23:23.642 "product_name": "Malloc disk", 00:23:23.642 "block_size": 512, 00:23:23.642 "num_blocks": 65536, 00:23:23.642 "uuid": "9fe288e5-6209-485d-874a-d27fa2a2e11a", 00:23:23.642 "assigned_rate_limits": { 00:23:23.642 "rw_ios_per_sec": 0, 00:23:23.642 "rw_mbytes_per_sec": 0, 00:23:23.642 "r_mbytes_per_sec": 0, 00:23:23.642 "w_mbytes_per_sec": 0 00:23:23.642 }, 00:23:23.642 "claimed": false, 00:23:23.642 "zoned": false, 00:23:23.642 "supported_io_types": { 00:23:23.642 "read": true, 00:23:23.642 "write": true, 00:23:23.642 "unmap": true, 00:23:23.643 "write_zeroes": true, 00:23:23.643 "flush": true, 00:23:23.643 "reset": true, 00:23:23.643 "compare": false, 00:23:23.643 "compare_and_write": false, 00:23:23.643 "abort": true, 00:23:23.643 "nvme_admin": false, 00:23:23.643 "nvme_io": false 00:23:23.643 }, 00:23:23.643 "memory_domains": [ 00:23:23.643 { 00:23:23.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:23.643 "dma_device_type": 2 00:23:23.643 } 00:23:23.643 ], 00:23:23.643 "driver_specific": {} 00:23:23.643 } 00:23:23.643 ] 00:23:23.643 16:39:00 -- common/autotest_common.sh@895 -- # return 0 00:23:23.643 16:39:00 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:23:23.902 [2024-07-11 16:39:00.593334] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:23.902 [2024-07-11 16:39:00.594969] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:23.902 [2024-07-11 16:39:00.595144] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:23.902 [2024-07-11 16:39:00.595241] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:23.902 [2024-07-11 
16:39:00.595383] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.902 16:39:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.161 16:39:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:24.161 "name": "Existed_Raid", 00:23:24.161 "uuid": "cfd03c9a-5c11-4669-91fe-6926b7dbb0e3", 00:23:24.161 "strip_size_kb": 64, 00:23:24.161 "state": "configuring", 00:23:24.161 "raid_level": "raid5f", 00:23:24.161 "superblock": true, 00:23:24.161 "num_base_bdevs": 3, 00:23:24.161 "num_base_bdevs_discovered": 1, 00:23:24.161 "num_base_bdevs_operational": 3, 00:23:24.161 "base_bdevs_list": [ 00:23:24.161 { 00:23:24.161 "name": "BaseBdev1", 00:23:24.161 "uuid": "9fe288e5-6209-485d-874a-d27fa2a2e11a", 00:23:24.161 "is_configured": true, 00:23:24.161 "data_offset": 2048, 00:23:24.161 "data_size": 63488 00:23:24.161 }, 00:23:24.161 { 00:23:24.161 "name": "BaseBdev2", 00:23:24.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.161 "is_configured": false, 00:23:24.161 "data_offset": 0, 00:23:24.161 "data_size": 0 00:23:24.161 }, 00:23:24.161 { 00:23:24.161 "name": "BaseBdev3", 00:23:24.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.161 "is_configured": false, 00:23:24.161 "data_offset": 0, 00:23:24.161 "data_size": 0 00:23:24.161 } 00:23:24.161 ] 00:23:24.161 }' 00:23:24.161 16:39:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:24.161 16:39:00 -- common/autotest_common.sh@10 -- # set +x 00:23:24.729 16:39:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:24.988 [2024-07-11 16:39:01.669587] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:24.988 BaseBdev2 00:23:24.988 16:39:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:24.988 16:39:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:23:24.988 16:39:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:24.988 16:39:01 -- common/autotest_common.sh@889 -- # local i 00:23:24.988 16:39:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:24.988 16:39:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:24.988 16:39:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:25.247 16:39:01 -- 
common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:25.247 [ 00:23:25.247 { 00:23:25.247 "name": "BaseBdev2", 00:23:25.247 "aliases": [ 00:23:25.247 "b401b628-9baa-4604-bf52-5c0ae30d08d6" 00:23:25.247 ], 00:23:25.247 "product_name": "Malloc disk", 00:23:25.247 "block_size": 512, 00:23:25.247 "num_blocks": 65536, 00:23:25.247 "uuid": "b401b628-9baa-4604-bf52-5c0ae30d08d6", 00:23:25.247 "assigned_rate_limits": { 00:23:25.247 "rw_ios_per_sec": 0, 00:23:25.247 "rw_mbytes_per_sec": 0, 00:23:25.247 "r_mbytes_per_sec": 0, 00:23:25.247 "w_mbytes_per_sec": 0 00:23:25.247 }, 00:23:25.247 "claimed": true, 00:23:25.247 "claim_type": "exclusive_write", 00:23:25.247 "zoned": false, 00:23:25.247 "supported_io_types": { 00:23:25.247 "read": true, 00:23:25.247 "write": true, 00:23:25.247 "unmap": true, 00:23:25.247 "write_zeroes": true, 00:23:25.247 "flush": true, 00:23:25.247 "reset": true, 00:23:25.247 "compare": false, 00:23:25.247 "compare_and_write": false, 00:23:25.247 "abort": true, 00:23:25.247 "nvme_admin": false, 00:23:25.247 "nvme_io": false 00:23:25.247 }, 00:23:25.247 "memory_domains": [ 00:23:25.247 { 00:23:25.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:25.247 "dma_device_type": 2 00:23:25.247 } 00:23:25.247 ], 00:23:25.247 "driver_specific": {} 00:23:25.247 } 00:23:25.247 ] 00:23:25.247 16:39:02 -- common/autotest_common.sh@895 -- # return 0 00:23:25.247 16:39:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:25.247 16:39:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:25.247 16:39:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:25.247 16:39:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:25.247 16:39:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:25.247 16:39:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:25.247 16:39:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:25.247 16:39:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:25.247 16:39:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:25.247 16:39:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:25.247 16:39:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:25.247 16:39:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:25.247 16:39:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:25.248 16:39:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:25.507 16:39:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:25.507 "name": "Existed_Raid", 00:23:25.507 "uuid": "cfd03c9a-5c11-4669-91fe-6926b7dbb0e3", 00:23:25.507 "strip_size_kb": 64, 00:23:25.507 "state": "configuring", 00:23:25.507 "raid_level": "raid5f", 00:23:25.507 "superblock": true, 00:23:25.507 "num_base_bdevs": 3, 00:23:25.507 "num_base_bdevs_discovered": 2, 00:23:25.507 "num_base_bdevs_operational": 3, 00:23:25.507 "base_bdevs_list": [ 00:23:25.507 { 00:23:25.507 "name": "BaseBdev1", 00:23:25.507 "uuid": "9fe288e5-6209-485d-874a-d27fa2a2e11a", 00:23:25.507 "is_configured": true, 00:23:25.507 "data_offset": 2048, 00:23:25.507 "data_size": 63488 00:23:25.507 }, 00:23:25.507 { 00:23:25.507 "name": "BaseBdev2", 00:23:25.507 "uuid": "b401b628-9baa-4604-bf52-5c0ae30d08d6", 00:23:25.507 "is_configured": true, 00:23:25.507 "data_offset": 2048, 00:23:25.507 
"data_size": 63488 00:23:25.507 }, 00:23:25.507 { 00:23:25.507 "name": "BaseBdev3", 00:23:25.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:25.507 "is_configured": false, 00:23:25.507 "data_offset": 0, 00:23:25.507 "data_size": 0 00:23:25.507 } 00:23:25.507 ] 00:23:25.507 }' 00:23:25.507 16:39:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:25.507 16:39:02 -- common/autotest_common.sh@10 -- # set +x 00:23:26.444 16:39:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:26.444 [2024-07-11 16:39:03.210707] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:26.444 [2024-07-11 16:39:03.211116] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:23:26.444 [2024-07-11 16:39:03.211239] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:26.444 BaseBdev3 00:23:26.444 [2024-07-11 16:39:03.211391] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:23:26.444 [2024-07-11 16:39:03.216328] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:23:26.444 [2024-07-11 16:39:03.216475] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:23:26.444 [2024-07-11 16:39:03.216800] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.444 16:39:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:26.444 16:39:03 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:23:26.444 16:39:03 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:23:26.444 16:39:03 -- common/autotest_common.sh@889 -- # local i 00:23:26.444 16:39:03 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:23:26.444 16:39:03 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:23:26.444 16:39:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:26.701 16:39:03 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:26.959 [ 00:23:26.959 { 00:23:26.959 "name": "BaseBdev3", 00:23:26.959 "aliases": [ 00:23:26.959 "3d1d3ddb-0b27-4d3f-b376-e5259463361c" 00:23:26.959 ], 00:23:26.959 "product_name": "Malloc disk", 00:23:26.959 "block_size": 512, 00:23:26.959 "num_blocks": 65536, 00:23:26.959 "uuid": "3d1d3ddb-0b27-4d3f-b376-e5259463361c", 00:23:26.959 "assigned_rate_limits": { 00:23:26.959 "rw_ios_per_sec": 0, 00:23:26.959 "rw_mbytes_per_sec": 0, 00:23:26.959 "r_mbytes_per_sec": 0, 00:23:26.959 "w_mbytes_per_sec": 0 00:23:26.959 }, 00:23:26.959 "claimed": true, 00:23:26.959 "claim_type": "exclusive_write", 00:23:26.959 "zoned": false, 00:23:26.959 "supported_io_types": { 00:23:26.959 "read": true, 00:23:26.959 "write": true, 00:23:26.959 "unmap": true, 00:23:26.959 "write_zeroes": true, 00:23:26.959 "flush": true, 00:23:26.959 "reset": true, 00:23:26.959 "compare": false, 00:23:26.959 "compare_and_write": false, 00:23:26.959 "abort": true, 00:23:26.959 "nvme_admin": false, 00:23:26.959 "nvme_io": false 00:23:26.959 }, 00:23:26.959 "memory_domains": [ 00:23:26.959 { 00:23:26.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.959 "dma_device_type": 2 00:23:26.959 } 00:23:26.959 ], 00:23:26.959 "driver_specific": {} 00:23:26.959 } 00:23:26.959 ] 00:23:26.959 
16:39:03 -- common/autotest_common.sh@895 -- # return 0 00:23:26.959 16:39:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:26.959 16:39:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:26.959 16:39:03 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:26.959 16:39:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:26.959 16:39:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:26.959 16:39:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:26.959 16:39:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:26.959 16:39:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:26.959 16:39:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:26.959 16:39:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:26.959 16:39:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:26.959 16:39:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:26.959 16:39:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.960 16:39:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.218 16:39:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:27.218 "name": "Existed_Raid", 00:23:27.218 "uuid": "cfd03c9a-5c11-4669-91fe-6926b7dbb0e3", 00:23:27.218 "strip_size_kb": 64, 00:23:27.218 "state": "online", 00:23:27.218 "raid_level": "raid5f", 00:23:27.218 "superblock": true, 00:23:27.218 "num_base_bdevs": 3, 00:23:27.218 "num_base_bdevs_discovered": 3, 00:23:27.218 "num_base_bdevs_operational": 3, 00:23:27.218 "base_bdevs_list": [ 00:23:27.218 { 00:23:27.218 "name": "BaseBdev1", 00:23:27.218 "uuid": "9fe288e5-6209-485d-874a-d27fa2a2e11a", 00:23:27.218 "is_configured": true, 00:23:27.218 "data_offset": 2048, 00:23:27.218 "data_size": 63488 00:23:27.218 }, 00:23:27.218 { 00:23:27.218 "name": "BaseBdev2", 00:23:27.218 "uuid": "b401b628-9baa-4604-bf52-5c0ae30d08d6", 00:23:27.218 "is_configured": true, 00:23:27.218 "data_offset": 2048, 00:23:27.218 "data_size": 63488 00:23:27.218 }, 00:23:27.218 { 00:23:27.218 "name": "BaseBdev3", 00:23:27.218 "uuid": "3d1d3ddb-0b27-4d3f-b376-e5259463361c", 00:23:27.218 "is_configured": true, 00:23:27.218 "data_offset": 2048, 00:23:27.218 "data_size": 63488 00:23:27.218 } 00:23:27.218 ] 00:23:27.218 }' 00:23:27.218 16:39:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:27.218 16:39:03 -- common/autotest_common.sh@10 -- # set +x 00:23:27.784 16:39:04 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:28.042 [2024-07-11 16:39:04.686794] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:28.042 16:39:04 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:28.042 16:39:04 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:28.042 16:39:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:28.042 16:39:04 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:28.043 16:39:04 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:28.043 16:39:04 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:28.043 16:39:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:28.043 16:39:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:28.043 16:39:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:28.043 16:39:04 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:23:28.043 16:39:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:28.043 16:39:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:28.043 16:39:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:28.043 16:39:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:28.043 16:39:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:28.043 16:39:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.043 16:39:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:28.301 16:39:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:28.301 "name": "Existed_Raid", 00:23:28.301 "uuid": "cfd03c9a-5c11-4669-91fe-6926b7dbb0e3", 00:23:28.301 "strip_size_kb": 64, 00:23:28.301 "state": "online", 00:23:28.301 "raid_level": "raid5f", 00:23:28.301 "superblock": true, 00:23:28.301 "num_base_bdevs": 3, 00:23:28.301 "num_base_bdevs_discovered": 2, 00:23:28.301 "num_base_bdevs_operational": 2, 00:23:28.301 "base_bdevs_list": [ 00:23:28.301 { 00:23:28.301 "name": null, 00:23:28.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:28.301 "is_configured": false, 00:23:28.301 "data_offset": 2048, 00:23:28.301 "data_size": 63488 00:23:28.301 }, 00:23:28.301 { 00:23:28.301 "name": "BaseBdev2", 00:23:28.301 "uuid": "b401b628-9baa-4604-bf52-5c0ae30d08d6", 00:23:28.301 "is_configured": true, 00:23:28.301 "data_offset": 2048, 00:23:28.301 "data_size": 63488 00:23:28.301 }, 00:23:28.301 { 00:23:28.301 "name": "BaseBdev3", 00:23:28.301 "uuid": "3d1d3ddb-0b27-4d3f-b376-e5259463361c", 00:23:28.301 "is_configured": true, 00:23:28.301 "data_offset": 2048, 00:23:28.301 "data_size": 63488 00:23:28.301 } 00:23:28.301 ] 00:23:28.301 }' 00:23:28.301 16:39:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:28.301 16:39:05 -- common/autotest_common.sh@10 -- # set +x 00:23:29.236 16:39:05 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:29.236 16:39:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:29.236 16:39:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.236 16:39:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:29.236 16:39:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:29.236 16:39:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:29.236 16:39:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:29.495 [2024-07-11 16:39:06.201482] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:29.495 [2024-07-11 16:39:06.201624] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:29.495 [2024-07-11 16:39:06.201788] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:29.495 16:39:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:29.495 16:39:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:29.495 16:39:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:29.495 16:39:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.766 16:39:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:29.766 16:39:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:29.766 16:39:06 -- bdev/bdev_raid.sh@279 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:30.037 [2024-07-11 16:39:06.680630] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:30.037 [2024-07-11 16:39:06.680820] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:23:30.037 16:39:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:30.037 16:39:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:30.037 16:39:06 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.037 16:39:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:30.296 16:39:07 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:30.296 16:39:07 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:30.296 16:39:07 -- bdev/bdev_raid.sh@287 -- # killprocess 130692 00:23:30.296 16:39:07 -- common/autotest_common.sh@926 -- # '[' -z 130692 ']' 00:23:30.296 16:39:07 -- common/autotest_common.sh@930 -- # kill -0 130692 00:23:30.296 16:39:07 -- common/autotest_common.sh@931 -- # uname 00:23:30.296 16:39:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:30.296 16:39:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 130692 00:23:30.296 killing process with pid 130692 00:23:30.296 16:39:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:30.296 16:39:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:30.296 16:39:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 130692' 00:23:30.296 16:39:07 -- common/autotest_common.sh@945 -- # kill 130692 00:23:30.296 16:39:07 -- common/autotest_common.sh@950 -- # wait 130692 00:23:30.296 [2024-07-11 16:39:07.027746] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:30.296 [2024-07-11 16:39:07.027880] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:31.230 ************************************ 00:23:31.230 END TEST raid5f_state_function_test_sb 00:23:31.230 ************************************ 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:31.230 00:23:31.230 real 0m12.544s 00:23:31.230 user 0m22.548s 00:23:31.230 sys 0m1.245s 00:23:31.230 16:39:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:31.230 16:39:07 -- common/autotest_common.sh@10 -- # set +x 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:23:31.230 16:39:07 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:23:31.230 16:39:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:31.230 16:39:07 -- common/autotest_common.sh@10 -- # set +x 00:23:31.230 ************************************ 00:23:31.230 START TEST raid5f_superblock_test 00:23:31.230 ************************************ 00:23:31.230 16:39:07 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 3 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:23:31.230 16:39:07 
-- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@357 -- # raid_pid=131092 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:31.230 16:39:07 -- bdev/bdev_raid.sh@358 -- # waitforlisten 131092 /var/tmp/spdk-raid.sock 00:23:31.230 16:39:07 -- common/autotest_common.sh@819 -- # '[' -z 131092 ']' 00:23:31.230 16:39:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:31.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:31.230 16:39:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:31.230 16:39:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:31.230 16:39:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:31.230 16:39:07 -- common/autotest_common.sh@10 -- # set +x 00:23:31.488 [2024-07-11 16:39:08.038379] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:31.488 [2024-07-11 16:39:08.038551] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131092 ] 00:23:31.488 [2024-07-11 16:39:08.192777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.747 [2024-07-11 16:39:08.421353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.005 [2024-07-11 16:39:08.586974] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:32.262 16:39:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:32.262 16:39:08 -- common/autotest_common.sh@852 -- # return 0 00:23:32.262 16:39:08 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:23:32.262 16:39:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:32.262 16:39:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:23:32.262 16:39:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:23:32.262 16:39:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:32.262 16:39:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:32.262 16:39:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:32.262 16:39:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:32.262 16:39:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:32.520 malloc1 00:23:32.520 16:39:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:32.778 
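The fixture being stood up here can be reproduced by hand using only commands visible in this log; the readiness loop below is a simplification standing in for the waitforlisten helper (an assumption, not the script's actual mechanism):

```bash
# By-hand bring-up of the superblock-test fixture, sketched from this log.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock

# A bare bdev service on a private RPC socket, with bdev_raid debug logging.
"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -L bdev_raid &

# Crude readiness check (assumed): poll until the RPC socket answers.
until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &> /dev/null; do
    sleep 0.1
done

# 32 MiB of 512-byte blocks (the 65536 num_blocks reported earlier), wrapped
# in a passthru bdev with a fixed UUID; the passthru layer is what the raid
# superblock ends up claiming.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_malloc_create 32 512 -b malloc1
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_passthru_create -b malloc1 -p pt1 \
    -u 00000000-0000-0000-0000-000000000001
```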
[2024-07-11 16:39:09.424282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:32.778 [2024-07-11 16:39:09.424378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.778 [2024-07-11 16:39:09.424409] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:32.778 [2024-07-11 16:39:09.424453] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.778 [2024-07-11 16:39:09.426658] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.778 [2024-07-11 16:39:09.426720] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:32.778 pt1 00:23:32.778 16:39:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:32.778 16:39:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:32.778 16:39:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:23:32.778 16:39:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:23:32.778 16:39:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:32.778 16:39:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:32.778 16:39:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:32.778 16:39:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:32.778 16:39:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:33.036 malloc2 00:23:33.036 16:39:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:33.294 [2024-07-11 16:39:09.904915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:33.294 [2024-07-11 16:39:09.905025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.294 [2024-07-11 16:39:09.905065] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:33.294 [2024-07-11 16:39:09.905116] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.294 [2024-07-11 16:39:09.907055] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.294 [2024-07-11 16:39:09.907116] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:33.294 pt2 00:23:33.294 16:39:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:33.294 16:39:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:33.294 16:39:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:23:33.294 16:39:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:23:33.294 16:39:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:33.294 16:39:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:33.294 16:39:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:33.294 16:39:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:33.294 16:39:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:33.552 malloc3 00:23:33.553 16:39:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:33.812 
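Reassembled from the bdev_raid.sh@361-371 trace lines above, the loop that builds the three malloc/passthru pairs looks roughly like the following (a sketch of the traced behavior, not the verbatim script; $rpc again abbreviates the rpc.py call with this run's socket):

```bash
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

num_base_bdevs=3
base_bdevs_malloc=() base_bdevs_pt=() base_bdevs_pt_uuid=()

for ((i = 1; i <= num_base_bdevs; i++)); do
    bdev_malloc=malloc$i
    bdev_pt=pt$i
    bdev_pt_uuid=00000000-0000-0000-0000-00000000000$i

    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")

    $rpc bdev_malloc_create 32 512 -b "$bdev_malloc"
    $rpc bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done

# With the passthru bdevs in place, the raid5f volume is created on top of
# them with a 64 KiB strip size and an on-disk superblock (-s), as traced
# just below at bdev_raid.sh@375:
$rpc bdev_raid_create -z 64 -r raid5f -b "${base_bdevs_pt[*]}" -n raid_bdev1 -s
```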
[2024-07-11 16:39:10.393310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:33.812 [2024-07-11 16:39:10.393422] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.812 [2024-07-11 16:39:10.393459] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:33.812 [2024-07-11 16:39:10.393499] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.812 [2024-07-11 16:39:10.395420] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.812 [2024-07-11 16:39:10.395469] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:33.812 pt3 00:23:33.812 16:39:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:33.812 16:39:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:33.812 16:39:10 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:23:34.071 [2024-07-11 16:39:10.661354] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:34.071 [2024-07-11 16:39:10.663116] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:34.071 [2024-07-11 16:39:10.663183] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:34.071 [2024-07-11 16:39:10.663390] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:34.071 [2024-07-11 16:39:10.663404] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:34.071 [2024-07-11 16:39:10.663526] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:23:34.071 [2024-07-11 16:39:10.667794] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:34.071 [2024-07-11 16:39:10.667820] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:34.071 [2024-07-11 16:39:10.667976] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:34.071 16:39:10 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:34.071 16:39:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:34.071 16:39:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:34.071 16:39:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:34.071 16:39:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:34.071 16:39:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:34.071 16:39:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:34.071 16:39:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:34.071 16:39:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:34.071 16:39:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:34.071 16:39:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.071 16:39:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.330 16:39:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:34.330 "name": "raid_bdev1", 00:23:34.330 "uuid": "13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e", 00:23:34.330 "strip_size_kb": 64, 00:23:34.330 "state": "online", 00:23:34.330 "raid_level": "raid5f", 00:23:34.330 "superblock": true, 00:23:34.330 
"num_base_bdevs": 3, 00:23:34.330 "num_base_bdevs_discovered": 3, 00:23:34.330 "num_base_bdevs_operational": 3, 00:23:34.330 "base_bdevs_list": [ 00:23:34.330 { 00:23:34.330 "name": "pt1", 00:23:34.330 "uuid": "7ca6f940-3df6-5f87-bdd1-344770a9f67f", 00:23:34.330 "is_configured": true, 00:23:34.330 "data_offset": 2048, 00:23:34.330 "data_size": 63488 00:23:34.330 }, 00:23:34.330 { 00:23:34.330 "name": "pt2", 00:23:34.330 "uuid": "4a6ba174-8d3a-550b-8d69-9ddf4f044a69", 00:23:34.330 "is_configured": true, 00:23:34.330 "data_offset": 2048, 00:23:34.330 "data_size": 63488 00:23:34.330 }, 00:23:34.330 { 00:23:34.330 "name": "pt3", 00:23:34.330 "uuid": "aeacae0a-d364-582b-9fe5-270f8e2f56df", 00:23:34.330 "is_configured": true, 00:23:34.330 "data_offset": 2048, 00:23:34.330 "data_size": 63488 00:23:34.330 } 00:23:34.330 ] 00:23:34.330 }' 00:23:34.330 16:39:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:34.330 16:39:10 -- common/autotest_common.sh@10 -- # set +x 00:23:34.898 16:39:11 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:34.898 16:39:11 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:23:35.156 [2024-07-11 16:39:11.816977] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:35.157 16:39:11 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e 00:23:35.157 16:39:11 -- bdev/bdev_raid.sh@380 -- # '[' -z 13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e ']' 00:23:35.157 16:39:11 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:35.415 [2024-07-11 16:39:12.116878] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:35.415 [2024-07-11 16:39:12.116903] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:35.415 [2024-07-11 16:39:12.116999] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:35.415 [2024-07-11 16:39:12.117080] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:35.415 [2024-07-11 16:39:12.117093] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:23:35.415 16:39:12 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.415 16:39:12 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:23:35.673 16:39:12 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:23:35.673 16:39:12 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:23:35.673 16:39:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:35.673 16:39:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:35.932 16:39:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:35.932 16:39:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:35.932 16:39:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:35.932 16:39:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:36.190 16:39:12 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:36.190 16:39:12 -- 
bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:36.447 16:39:13 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:23:36.447 16:39:13 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:36.448 16:39:13 -- common/autotest_common.sh@640 -- # local es=0 00:23:36.448 16:39:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:36.448 16:39:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:36.448 16:39:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:36.448 16:39:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:36.448 16:39:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:36.448 16:39:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:36.448 16:39:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:36.448 16:39:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:36.448 16:39:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:36.448 16:39:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:36.704 [2024-07-11 16:39:13.313110] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:36.704 [2024-07-11 16:39:13.314724] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:36.704 [2024-07-11 16:39:13.314777] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:36.704 [2024-07-11 16:39:13.314826] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:23:36.704 [2024-07-11 16:39:13.314906] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:23:36.704 [2024-07-11 16:39:13.314939] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:23:36.704 [2024-07-11 16:39:13.315015] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:36.704 [2024-07-11 16:39:13.315027] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:23:36.704 request: 00:23:36.704 { 00:23:36.704 "name": "raid_bdev1", 00:23:36.704 "raid_level": "raid5f", 00:23:36.704 "base_bdevs": [ 00:23:36.704 "malloc1", 00:23:36.704 "malloc2", 00:23:36.704 "malloc3" 00:23:36.704 ], 00:23:36.704 "superblock": false, 00:23:36.704 "strip_size_kb": 64, 00:23:36.704 "method": "bdev_raid_create", 00:23:36.704 "req_id": 1 00:23:36.704 } 00:23:36.704 Got JSON-RPC error response 00:23:36.704 response: 00:23:36.704 { 00:23:36.704 "code": -17, 00:23:36.704 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:36.704 } 00:23:36.704 16:39:13 -- common/autotest_common.sh@643 -- # es=1 00:23:36.704 16:39:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:36.704 16:39:13 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:36.704 16:39:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:36.704 16:39:13 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.704 16:39:13 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:23:36.704 16:39:13 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:23:36.704 16:39:13 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:23:36.704 16:39:13 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:36.962 [2024-07-11 16:39:13.685161] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:36.962 [2024-07-11 16:39:13.685256] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.962 [2024-07-11 16:39:13.685290] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:36.962 [2024-07-11 16:39:13.685310] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.962 [2024-07-11 16:39:13.687257] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.962 [2024-07-11 16:39:13.687303] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:36.962 [2024-07-11 16:39:13.687417] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:36.962 [2024-07-11 16:39:13.687465] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:36.962 pt1 00:23:36.962 16:39:13 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:36.962 16:39:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:36.962 16:39:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:36.962 16:39:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:36.962 16:39:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:36.962 16:39:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:36.962 16:39:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:36.962 16:39:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:36.962 16:39:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:36.962 16:39:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:36.962 16:39:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.962 16:39:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:37.219 16:39:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:37.219 "name": "raid_bdev1", 00:23:37.219 "uuid": "13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e", 00:23:37.219 "strip_size_kb": 64, 00:23:37.219 "state": "configuring", 00:23:37.219 "raid_level": "raid5f", 00:23:37.219 "superblock": true, 00:23:37.219 "num_base_bdevs": 3, 00:23:37.219 "num_base_bdevs_discovered": 1, 00:23:37.219 "num_base_bdevs_operational": 3, 00:23:37.219 "base_bdevs_list": [ 00:23:37.219 { 00:23:37.219 "name": "pt1", 00:23:37.219 "uuid": "7ca6f940-3df6-5f87-bdd1-344770a9f67f", 00:23:37.219 "is_configured": true, 00:23:37.219 "data_offset": 2048, 00:23:37.219 "data_size": 63488 00:23:37.219 }, 00:23:37.219 { 00:23:37.219 "name": null, 00:23:37.219 "uuid": "4a6ba174-8d3a-550b-8d69-9ddf4f044a69", 00:23:37.219 "is_configured": false, 00:23:37.219 
"data_offset": 2048, 00:23:37.219 "data_size": 63488 00:23:37.219 }, 00:23:37.219 { 00:23:37.219 "name": null, 00:23:37.219 "uuid": "aeacae0a-d364-582b-9fe5-270f8e2f56df", 00:23:37.219 "is_configured": false, 00:23:37.219 "data_offset": 2048, 00:23:37.219 "data_size": 63488 00:23:37.219 } 00:23:37.219 ] 00:23:37.219 }' 00:23:37.219 16:39:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:37.219 16:39:13 -- common/autotest_common.sh@10 -- # set +x 00:23:37.786 16:39:14 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:23:37.786 16:39:14 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:38.045 [2024-07-11 16:39:14.781484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:38.045 [2024-07-11 16:39:14.781584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.045 [2024-07-11 16:39:14.781631] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:38.045 [2024-07-11 16:39:14.781651] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.045 [2024-07-11 16:39:14.782127] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.045 [2024-07-11 16:39:14.782183] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:38.045 [2024-07-11 16:39:14.782309] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:38.045 [2024-07-11 16:39:14.782345] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:38.045 pt2 00:23:38.045 16:39:14 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:38.304 [2024-07-11 16:39:14.977528] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:38.304 16:39:14 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:38.304 16:39:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:38.304 16:39:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:38.304 16:39:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:38.304 16:39:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:38.304 16:39:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:38.304 16:39:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:38.304 16:39:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:38.304 16:39:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:38.304 16:39:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:38.304 16:39:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.304 16:39:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.563 16:39:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:38.563 "name": "raid_bdev1", 00:23:38.563 "uuid": "13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e", 00:23:38.563 "strip_size_kb": 64, 00:23:38.563 "state": "configuring", 00:23:38.563 "raid_level": "raid5f", 00:23:38.563 "superblock": true, 00:23:38.563 "num_base_bdevs": 3, 00:23:38.563 "num_base_bdevs_discovered": 1, 00:23:38.563 "num_base_bdevs_operational": 3, 00:23:38.563 "base_bdevs_list": [ 00:23:38.563 { 00:23:38.563 "name": "pt1", 00:23:38.563 "uuid": 
"7ca6f940-3df6-5f87-bdd1-344770a9f67f", 00:23:38.563 "is_configured": true, 00:23:38.563 "data_offset": 2048, 00:23:38.563 "data_size": 63488 00:23:38.563 }, 00:23:38.563 { 00:23:38.563 "name": null, 00:23:38.563 "uuid": "4a6ba174-8d3a-550b-8d69-9ddf4f044a69", 00:23:38.563 "is_configured": false, 00:23:38.563 "data_offset": 2048, 00:23:38.563 "data_size": 63488 00:23:38.563 }, 00:23:38.563 { 00:23:38.563 "name": null, 00:23:38.563 "uuid": "aeacae0a-d364-582b-9fe5-270f8e2f56df", 00:23:38.563 "is_configured": false, 00:23:38.563 "data_offset": 2048, 00:23:38.563 "data_size": 63488 00:23:38.563 } 00:23:38.563 ] 00:23:38.563 }' 00:23:38.563 16:39:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:38.563 16:39:15 -- common/autotest_common.sh@10 -- # set +x 00:23:39.130 16:39:15 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:23:39.130 16:39:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:39.130 16:39:15 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:39.390 [2024-07-11 16:39:16.161731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:39.390 [2024-07-11 16:39:16.161845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.390 [2024-07-11 16:39:16.161882] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:39.390 [2024-07-11 16:39:16.161910] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.390 [2024-07-11 16:39:16.162475] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.390 [2024-07-11 16:39:16.162536] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:39.390 [2024-07-11 16:39:16.162661] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:39.390 [2024-07-11 16:39:16.162687] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:39.390 pt2 00:23:39.390 16:39:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:39.390 16:39:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:39.390 16:39:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:39.649 [2024-07-11 16:39:16.337758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:39.649 [2024-07-11 16:39:16.337832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:39.649 [2024-07-11 16:39:16.337861] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:39.649 [2024-07-11 16:39:16.337883] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:39.649 [2024-07-11 16:39:16.338277] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:39.649 [2024-07-11 16:39:16.338322] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:39.649 [2024-07-11 16:39:16.338417] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:39.649 [2024-07-11 16:39:16.338442] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:39.649 [2024-07-11 16:39:16.338561] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 
00:23:39.649 [2024-07-11 16:39:16.338575] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:39.649 [2024-07-11 16:39:16.338678] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:39.649 [2024-07-11 16:39:16.342835] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:23:39.649 [2024-07-11 16:39:16.342861] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:23:39.649 [2024-07-11 16:39:16.343041] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.649 pt3 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.649 16:39:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.908 16:39:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:39.908 "name": "raid_bdev1", 00:23:39.908 "uuid": "13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e", 00:23:39.908 "strip_size_kb": 64, 00:23:39.908 "state": "online", 00:23:39.908 "raid_level": "raid5f", 00:23:39.908 "superblock": true, 00:23:39.908 "num_base_bdevs": 3, 00:23:39.908 "num_base_bdevs_discovered": 3, 00:23:39.908 "num_base_bdevs_operational": 3, 00:23:39.908 "base_bdevs_list": [ 00:23:39.908 { 00:23:39.908 "name": "pt1", 00:23:39.908 "uuid": "7ca6f940-3df6-5f87-bdd1-344770a9f67f", 00:23:39.908 "is_configured": true, 00:23:39.908 "data_offset": 2048, 00:23:39.908 "data_size": 63488 00:23:39.908 }, 00:23:39.908 { 00:23:39.908 "name": "pt2", 00:23:39.908 "uuid": "4a6ba174-8d3a-550b-8d69-9ddf4f044a69", 00:23:39.908 "is_configured": true, 00:23:39.908 "data_offset": 2048, 00:23:39.908 "data_size": 63488 00:23:39.908 }, 00:23:39.908 { 00:23:39.908 "name": "pt3", 00:23:39.908 "uuid": "aeacae0a-d364-582b-9fe5-270f8e2f56df", 00:23:39.908 "is_configured": true, 00:23:39.908 "data_offset": 2048, 00:23:39.908 "data_size": 63488 00:23:39.908 } 00:23:39.908 ] 00:23:39.908 }' 00:23:39.908 16:39:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:39.908 16:39:16 -- common/autotest_common.sh@10 -- # set +x 00:23:40.475 16:39:17 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:40.475 16:39:17 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:40.733 [2024-07-11 16:39:17.329431] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:40.733 16:39:17 -- bdev/bdev_raid.sh@430 -- # '[' 
13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e '!=' 13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e ']' 00:23:40.733 16:39:17 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:40.733 16:39:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:40.733 16:39:17 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:40.733 16:39:17 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:40.733 [2024-07-11 16:39:17.513640] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:40.733 16:39:17 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:40.733 16:39:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:40.733 16:39:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:40.733 16:39:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:40.733 16:39:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:40.733 16:39:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:40.733 16:39:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:40.733 16:39:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:40.734 16:39:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:40.734 16:39:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:40.734 16:39:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.734 16:39:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.993 16:39:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:40.993 "name": "raid_bdev1", 00:23:40.993 "uuid": "13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e", 00:23:40.993 "strip_size_kb": 64, 00:23:40.993 "state": "online", 00:23:40.993 "raid_level": "raid5f", 00:23:40.993 "superblock": true, 00:23:40.993 "num_base_bdevs": 3, 00:23:40.993 "num_base_bdevs_discovered": 2, 00:23:40.993 "num_base_bdevs_operational": 2, 00:23:40.993 "base_bdevs_list": [ 00:23:40.993 { 00:23:40.993 "name": null, 00:23:40.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.993 "is_configured": false, 00:23:40.993 "data_offset": 2048, 00:23:40.993 "data_size": 63488 00:23:40.993 }, 00:23:40.993 { 00:23:40.993 "name": "pt2", 00:23:40.993 "uuid": "4a6ba174-8d3a-550b-8d69-9ddf4f044a69", 00:23:40.993 "is_configured": true, 00:23:40.993 "data_offset": 2048, 00:23:40.993 "data_size": 63488 00:23:40.993 }, 00:23:40.993 { 00:23:40.993 "name": "pt3", 00:23:40.993 "uuid": "aeacae0a-d364-582b-9fe5-270f8e2f56df", 00:23:40.993 "is_configured": true, 00:23:40.993 "data_offset": 2048, 00:23:40.993 "data_size": 63488 00:23:40.993 } 00:23:40.993 ] 00:23:40.993 }' 00:23:40.993 16:39:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:40.993 16:39:17 -- common/autotest_common.sh@10 -- # set +x 00:23:41.930 16:39:18 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:41.930 [2024-07-11 16:39:18.622335] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:41.930 [2024-07-11 16:39:18.622371] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:41.930 [2024-07-11 16:39:18.622434] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:41.930 [2024-07-11 16:39:18.622494] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:41.930 
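The sequence just traced is the redundancy check: has_redundancy returns 0 for raid5f (bdev_raid.sh@434), so removing one of the three base bdevs must leave the array online in degraded mode rather than failing it. A by-hand replay, a sketch using this run's names and socket, could look like:

```bash
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# raid5f carries parity, so dropping one of three base bdevs leaves the
# volume online (degraded) with only two members discovered.
$rpc bdev_passthru_delete pt1

state=$($rpc bdev_raid_get_bdevs all |
    jq -r '.[] | select(.name == "raid_bdev1") | .state')
discovered=$($rpc bdev_raid_get_bdevs all |
    jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered')
[[ $state == online && $discovered -eq 2 ]] || echo 'degraded check failed' >&2

# Tearing the volume down afterwards drives the online -> offline transition
# logged around this point.
$rpc bdev_raid_delete raid_bdev1
```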
[2024-07-11 16:39:18.622505] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:23:41.930 16:39:18 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.930 16:39:18 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:42.189 16:39:18 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:42.189 16:39:18 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:42.189 16:39:18 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:42.189 16:39:18 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:42.189 16:39:18 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:42.448 16:39:19 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:42.448 16:39:19 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:42.448 16:39:19 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:42.707 [2024-07-11 16:39:19.426441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:42.707 [2024-07-11 16:39:19.426521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:42.707 [2024-07-11 16:39:19.426556] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:42.707 [2024-07-11 16:39:19.426578] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:42.707 [2024-07-11 16:39:19.428841] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:42.707 [2024-07-11 16:39:19.428904] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:42.707 [2024-07-11 16:39:19.429057] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:42.707 [2024-07-11 16:39:19.429147] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:42.707 pt2 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.707 16:39:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:23:42.966 16:39:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:42.966 "name": "raid_bdev1", 00:23:42.966 "uuid": "13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e", 00:23:42.966 "strip_size_kb": 64, 00:23:42.966 "state": "configuring", 00:23:42.966 "raid_level": "raid5f", 00:23:42.966 "superblock": true, 00:23:42.966 "num_base_bdevs": 3, 00:23:42.966 "num_base_bdevs_discovered": 1, 00:23:42.966 "num_base_bdevs_operational": 2, 00:23:42.966 "base_bdevs_list": [ 00:23:42.966 { 00:23:42.966 "name": null, 00:23:42.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.966 "is_configured": false, 00:23:42.966 "data_offset": 2048, 00:23:42.966 "data_size": 63488 00:23:42.966 }, 00:23:42.966 { 00:23:42.966 "name": "pt2", 00:23:42.966 "uuid": "4a6ba174-8d3a-550b-8d69-9ddf4f044a69", 00:23:42.966 "is_configured": true, 00:23:42.966 "data_offset": 2048, 00:23:42.966 "data_size": 63488 00:23:42.966 }, 00:23:42.966 { 00:23:42.966 "name": null, 00:23:42.966 "uuid": "aeacae0a-d364-582b-9fe5-270f8e2f56df", 00:23:42.966 "is_configured": false, 00:23:42.966 "data_offset": 2048, 00:23:42.966 "data_size": 63488 00:23:42.966 } 00:23:42.966 ] 00:23:42.966 }' 00:23:42.966 16:39:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:42.966 16:39:19 -- common/autotest_common.sh@10 -- # set +x 00:23:43.547 16:39:20 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:43.547 16:39:20 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:43.547 16:39:20 -- bdev/bdev_raid.sh@462 -- # i=2 00:23:43.547 16:39:20 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:43.832 [2024-07-11 16:39:20.521513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:43.832 [2024-07-11 16:39:20.521606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.832 [2024-07-11 16:39:20.521646] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:43.832 [2024-07-11 16:39:20.521669] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.832 [2024-07-11 16:39:20.522206] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.832 [2024-07-11 16:39:20.522263] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:43.832 [2024-07-11 16:39:20.522412] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:43.832 [2024-07-11 16:39:20.522455] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:43.832 [2024-07-11 16:39:20.522595] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:23:43.832 [2024-07-11 16:39:20.522609] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:43.832 [2024-07-11 16:39:20.522699] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:43.832 [2024-07-11 16:39:20.526883] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:23:43.832 [2024-07-11 16:39:20.526908] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:23:43.832 [2024-07-11 16:39:20.527212] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:43.832 pt3 00:23:43.832 16:39:20 -- bdev/bdev_raid.sh@466 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:43.832 16:39:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:43.832 16:39:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:43.832 16:39:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:43.832 16:39:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:43.832 16:39:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:43.832 16:39:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:43.832 16:39:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:43.832 16:39:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:43.832 16:39:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:43.832 16:39:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.832 16:39:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.091 16:39:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:44.091 "name": "raid_bdev1", 00:23:44.091 "uuid": "13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e", 00:23:44.091 "strip_size_kb": 64, 00:23:44.091 "state": "online", 00:23:44.091 "raid_level": "raid5f", 00:23:44.091 "superblock": true, 00:23:44.091 "num_base_bdevs": 3, 00:23:44.091 "num_base_bdevs_discovered": 2, 00:23:44.091 "num_base_bdevs_operational": 2, 00:23:44.091 "base_bdevs_list": [ 00:23:44.091 { 00:23:44.091 "name": null, 00:23:44.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:44.091 "is_configured": false, 00:23:44.091 "data_offset": 2048, 00:23:44.091 "data_size": 63488 00:23:44.091 }, 00:23:44.091 { 00:23:44.091 "name": "pt2", 00:23:44.091 "uuid": "4a6ba174-8d3a-550b-8d69-9ddf4f044a69", 00:23:44.091 "is_configured": true, 00:23:44.091 "data_offset": 2048, 00:23:44.091 "data_size": 63488 00:23:44.091 }, 00:23:44.091 { 00:23:44.091 "name": "pt3", 00:23:44.091 "uuid": "aeacae0a-d364-582b-9fe5-270f8e2f56df", 00:23:44.091 "is_configured": true, 00:23:44.091 "data_offset": 2048, 00:23:44.091 "data_size": 63488 00:23:44.091 } 00:23:44.091 ] 00:23:44.091 }' 00:23:44.091 16:39:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:44.091 16:39:20 -- common/autotest_common.sh@10 -- # set +x 00:23:44.658 16:39:21 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:23:44.658 16:39:21 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:44.917 [2024-07-11 16:39:21.500090] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:44.917 [2024-07-11 16:39:21.500122] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:44.917 [2024-07-11 16:39:21.500201] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:44.917 [2024-07-11 16:39:21.500260] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:44.917 [2024-07-11 16:39:21.500271] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:23:44.917 16:39:21 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.917 16:39:21 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:44.917 16:39:21 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:44.917 16:39:21 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:44.917 16:39:21 -- 
bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:45.176 [2024-07-11 16:39:21.924154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:45.176 [2024-07-11 16:39:21.924235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:45.176 [2024-07-11 16:39:21.924272] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:45.176 [2024-07-11 16:39:21.924297] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:45.176 [2024-07-11 16:39:21.926622] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:45.176 [2024-07-11 16:39:21.926671] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:45.176 [2024-07-11 16:39:21.926795] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:45.176 [2024-07-11 16:39:21.926853] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:45.176 pt1 00:23:45.176 16:39:21 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:45.176 16:39:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:45.176 16:39:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:45.176 16:39:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:45.176 16:39:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:45.176 16:39:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:45.176 16:39:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:45.176 16:39:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:45.176 16:39:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:45.176 16:39:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:45.176 16:39:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.176 16:39:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.435 16:39:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:45.435 "name": "raid_bdev1", 00:23:45.435 "uuid": "13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e", 00:23:45.435 "strip_size_kb": 64, 00:23:45.436 "state": "configuring", 00:23:45.436 "raid_level": "raid5f", 00:23:45.436 "superblock": true, 00:23:45.436 "num_base_bdevs": 3, 00:23:45.436 "num_base_bdevs_discovered": 1, 00:23:45.436 "num_base_bdevs_operational": 3, 00:23:45.436 "base_bdevs_list": [ 00:23:45.436 { 00:23:45.436 "name": "pt1", 00:23:45.436 "uuid": "7ca6f940-3df6-5f87-bdd1-344770a9f67f", 00:23:45.436 "is_configured": true, 00:23:45.436 "data_offset": 2048, 00:23:45.436 "data_size": 63488 00:23:45.436 }, 00:23:45.436 { 00:23:45.436 "name": null, 00:23:45.436 "uuid": "4a6ba174-8d3a-550b-8d69-9ddf4f044a69", 00:23:45.436 "is_configured": false, 00:23:45.436 "data_offset": 2048, 00:23:45.436 "data_size": 63488 00:23:45.436 }, 00:23:45.436 { 00:23:45.436 "name": null, 00:23:45.436 "uuid": "aeacae0a-d364-582b-9fe5-270f8e2f56df", 00:23:45.436 "is_configured": false, 00:23:45.436 "data_offset": 2048, 00:23:45.436 "data_size": 63488 00:23:45.436 } 00:23:45.436 ] 00:23:45.436 }' 00:23:45.436 16:39:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:45.436 16:39:22 -- common/autotest_common.sh@10 -- # set +x 00:23:46.003 16:39:22 -- 
bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:46.003 16:39:22 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:46.003 16:39:22 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:46.262 16:39:22 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:46.262 16:39:22 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:46.262 16:39:22 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:46.520 16:39:23 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:46.520 16:39:23 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:46.520 16:39:23 -- bdev/bdev_raid.sh@489 -- # i=2 00:23:46.520 16:39:23 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:46.778 [2024-07-11 16:39:23.416411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:46.778 [2024-07-11 16:39:23.416497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:46.778 [2024-07-11 16:39:23.416530] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:46.778 [2024-07-11 16:39:23.416567] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:46.778 [2024-07-11 16:39:23.417113] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:46.778 [2024-07-11 16:39:23.417180] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:46.778 [2024-07-11 16:39:23.417283] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:46.778 [2024-07-11 16:39:23.417312] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:46.778 [2024-07-11 16:39:23.417319] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:46.778 [2024-07-11 16:39:23.417351] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:23:46.778 [2024-07-11 16:39:23.417447] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:46.778 pt3 00:23:46.778 16:39:23 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:46.778 16:39:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:46.778 16:39:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:46.778 16:39:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:46.778 16:39:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:46.778 16:39:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:46.778 16:39:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:46.778 16:39:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:46.778 16:39:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:46.778 16:39:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:46.778 16:39:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.778 16:39:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.036 16:39:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:47.036 "name": "raid_bdev1", 
00:23:47.036 "uuid": "13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e", 00:23:47.036 "strip_size_kb": 64, 00:23:47.036 "state": "configuring", 00:23:47.036 "raid_level": "raid5f", 00:23:47.036 "superblock": true, 00:23:47.036 "num_base_bdevs": 3, 00:23:47.036 "num_base_bdevs_discovered": 1, 00:23:47.036 "num_base_bdevs_operational": 2, 00:23:47.036 "base_bdevs_list": [ 00:23:47.036 { 00:23:47.036 "name": null, 00:23:47.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:47.036 "is_configured": false, 00:23:47.036 "data_offset": 2048, 00:23:47.036 "data_size": 63488 00:23:47.036 }, 00:23:47.036 { 00:23:47.036 "name": null, 00:23:47.036 "uuid": "4a6ba174-8d3a-550b-8d69-9ddf4f044a69", 00:23:47.036 "is_configured": false, 00:23:47.036 "data_offset": 2048, 00:23:47.036 "data_size": 63488 00:23:47.036 }, 00:23:47.036 { 00:23:47.036 "name": "pt3", 00:23:47.036 "uuid": "aeacae0a-d364-582b-9fe5-270f8e2f56df", 00:23:47.036 "is_configured": true, 00:23:47.036 "data_offset": 2048, 00:23:47.036 "data_size": 63488 00:23:47.036 } 00:23:47.036 ] 00:23:47.036 }' 00:23:47.036 16:39:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:47.036 16:39:23 -- common/autotest_common.sh@10 -- # set +x 00:23:47.602 16:39:24 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:47.602 16:39:24 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:47.602 16:39:24 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:47.860 [2024-07-11 16:39:24.492601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:47.860 [2024-07-11 16:39:24.492663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:47.860 [2024-07-11 16:39:24.492689] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:47.860 [2024-07-11 16:39:24.492713] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:47.860 [2024-07-11 16:39:24.493149] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:47.860 [2024-07-11 16:39:24.493185] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:47.860 [2024-07-11 16:39:24.493263] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:47.860 [2024-07-11 16:39:24.493307] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:47.860 [2024-07-11 16:39:24.493423] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:23:47.860 [2024-07-11 16:39:24.493435] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:47.860 [2024-07-11 16:39:24.493519] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:47.860 [2024-07-11 16:39:24.497832] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:23:47.860 [2024-07-11 16:39:24.497856] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:23:47.860 pt2 00:23:47.860 [2024-07-11 16:39:24.498083] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 
00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.860 16:39:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.117 16:39:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:48.117 "name": "raid_bdev1", 00:23:48.117 "uuid": "13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e", 00:23:48.117 "strip_size_kb": 64, 00:23:48.117 "state": "online", 00:23:48.117 "raid_level": "raid5f", 00:23:48.117 "superblock": true, 00:23:48.117 "num_base_bdevs": 3, 00:23:48.117 "num_base_bdevs_discovered": 2, 00:23:48.117 "num_base_bdevs_operational": 2, 00:23:48.117 "base_bdevs_list": [ 00:23:48.117 { 00:23:48.117 "name": null, 00:23:48.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.117 "is_configured": false, 00:23:48.117 "data_offset": 2048, 00:23:48.117 "data_size": 63488 00:23:48.117 }, 00:23:48.117 { 00:23:48.117 "name": "pt2", 00:23:48.117 "uuid": "4a6ba174-8d3a-550b-8d69-9ddf4f044a69", 00:23:48.117 "is_configured": true, 00:23:48.117 "data_offset": 2048, 00:23:48.117 "data_size": 63488 00:23:48.117 }, 00:23:48.117 { 00:23:48.117 "name": "pt3", 00:23:48.117 "uuid": "aeacae0a-d364-582b-9fe5-270f8e2f56df", 00:23:48.117 "is_configured": true, 00:23:48.117 "data_offset": 2048, 00:23:48.117 "data_size": 63488 00:23:48.117 } 00:23:48.117 ] 00:23:48.117 }' 00:23:48.117 16:39:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:48.117 16:39:24 -- common/autotest_common.sh@10 -- # set +x 00:23:48.683 16:39:25 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:48.683 16:39:25 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:23:48.941 [2024-07-11 16:39:25.622912] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:48.941 16:39:25 -- bdev/bdev_raid.sh@506 -- # '[' 13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e '!=' 13c767ef-59d4-4e5d-80f5-9f5e6bd2da6e ']' 00:23:48.941 16:39:25 -- bdev/bdev_raid.sh@511 -- # killprocess 131092 00:23:48.941 16:39:25 -- common/autotest_common.sh@926 -- # '[' -z 131092 ']' 00:23:48.941 16:39:25 -- common/autotest_common.sh@930 -- # kill -0 131092 00:23:48.941 16:39:25 -- common/autotest_common.sh@931 -- # uname 00:23:48.941 16:39:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:48.941 16:39:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131092 00:23:48.941 killing process with pid 131092 00:23:48.942 16:39:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:48.942 16:39:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:48.942 16:39:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131092' 00:23:48.942 16:39:25 -- common/autotest_common.sh@945 -- # kill 
131092 00:23:48.942 16:39:25 -- common/autotest_common.sh@950 -- # wait 131092 00:23:48.942 [2024-07-11 16:39:25.657222] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:48.942 [2024-07-11 16:39:25.657377] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:48.942 [2024-07-11 16:39:25.657461] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:48.942 [2024-07-11 16:39:25.657480] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:23:49.200 [2024-07-11 16:39:25.846814] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:50.135 ************************************ 00:23:50.135 END TEST raid5f_superblock_test 00:23:50.135 ************************************ 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@513 -- # return 0 00:23:50.135 00:23:50.135 real 0m18.769s 00:23:50.135 user 0m34.852s 00:23:50.135 sys 0m2.013s 00:23:50.135 16:39:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.135 16:39:26 -- common/autotest_common.sh@10 -- # set +x 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:23:50.135 16:39:26 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:23:50.135 16:39:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:50.135 16:39:26 -- common/autotest_common.sh@10 -- # set +x 00:23:50.135 ************************************ 00:23:50.135 START TEST raid5f_rebuild_test 00:23:50.135 ************************************ 00:23:50.135 16:39:26 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 false false 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:50.135 16:39:26 -- 
bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@544 -- # raid_pid=131732 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131732 /var/tmp/spdk-raid.sock 00:23:50.135 16:39:26 -- common/autotest_common.sh@819 -- # '[' -z 131732 ']' 00:23:50.135 16:39:26 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:50.135 16:39:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:50.135 16:39:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:50.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:50.135 16:39:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:50.135 16:39:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:50.135 16:39:26 -- common/autotest_common.sh@10 -- # set +x 00:23:50.135 [2024-07-11 16:39:26.862564] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:50.135 [2024-07-11 16:39:26.862730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131732 ] 00:23:50.135 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:50.135 Zero copy mechanism will not be used. 
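The trace that follows exercises the raid5f rebuild path end to end: three malloc base bdevs are created and assembled into raid_bdev1, data is written through an NBD device, one base bdev is removed to degrade the array, and a delayed "spare" bdev is attached so the rebuild can be observed in flight. A condensed sketch of that RPC flow, assuming the same socket and rpc.py as in the bdevperf invocation above — the RPC/rpc variables, the loop, and the final progress query are illustrative shorthand rather than part of the captured script, but every RPC name is taken verbatim from this log:

  RPC=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # three 32 MB malloc base bdevs with 512-byte blocks
  for i in 1 2 3; do $rpc -s $RPC bdev_malloc_create 32 512 -b BaseBdev$i; done
  # assemble a raid5f array with a 64 KiB strip size
  $rpc -s $RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1
  # degrade the array, then attach the spare to start a rebuild
  $rpc -s $RPC bdev_raid_remove_base_bdev BaseBdev1
  $rpc -s $RPC bdev_raid_add_base_bdev raid_bdev1 spare
  # poll rebuild progress, the same state verify_raid_bdev_process inspects via jq
  $rpc -s $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.progress.percent'

In the test itself the spare sits on a delay bdev (bdev_delay_create ... -w 100000 -n 100000), so write latency to the spare is deliberately inflated and the state checks between steps can land while the rebuild process is still running.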
00:23:50.394 [2024-07-11 16:39:27.018632] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.652 [2024-07-11 16:39:27.234551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.652 [2024-07-11 16:39:27.404320] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:51.219 16:39:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:51.219 16:39:27 -- common/autotest_common.sh@852 -- # return 0 00:23:51.219 16:39:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:51.219 16:39:27 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:51.219 16:39:27 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:51.219 BaseBdev1 00:23:51.219 16:39:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:51.219 16:39:27 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:51.219 16:39:27 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:51.477 BaseBdev2 00:23:51.477 16:39:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:51.477 16:39:28 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:51.477 16:39:28 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:51.735 BaseBdev3 00:23:51.735 16:39:28 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:51.994 spare_malloc 00:23:51.994 16:39:28 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:52.253 spare_delay 00:23:52.253 16:39:28 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:52.253 [2024-07-11 16:39:28.993987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:52.253 [2024-07-11 16:39:28.994242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:52.253 [2024-07-11 16:39:28.994374] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:52.253 [2024-07-11 16:39:28.994504] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:52.253 [2024-07-11 16:39:28.996529] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:52.253 [2024-07-11 16:39:28.996690] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:52.253 spare 00:23:52.253 16:39:29 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:52.511 [2024-07-11 16:39:29.178339] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:52.512 [2024-07-11 16:39:29.180022] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:52.512 [2024-07-11 16:39:29.180182] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:52.512 [2024-07-11 16:39:29.180327] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:52.512 
[2024-07-11 16:39:29.180384] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:52.512 [2024-07-11 16:39:29.180628] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:23:52.512 [2024-07-11 16:39:29.185086] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:52.512 [2024-07-11 16:39:29.185234] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:52.512 [2024-07-11 16:39:29.185589] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.512 16:39:29 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:52.512 16:39:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:52.512 16:39:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:52.512 16:39:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:52.512 16:39:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:52.512 16:39:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:52.512 16:39:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:52.512 16:39:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:52.512 16:39:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:52.512 16:39:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:52.512 16:39:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.512 16:39:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.771 16:39:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:52.771 "name": "raid_bdev1", 00:23:52.771 "uuid": "a34ded52-f319-4277-bc65-0c59a7ef17e3", 00:23:52.771 "strip_size_kb": 64, 00:23:52.771 "state": "online", 00:23:52.771 "raid_level": "raid5f", 00:23:52.771 "superblock": false, 00:23:52.771 "num_base_bdevs": 3, 00:23:52.771 "num_base_bdevs_discovered": 3, 00:23:52.771 "num_base_bdevs_operational": 3, 00:23:52.771 "base_bdevs_list": [ 00:23:52.771 { 00:23:52.771 "name": "BaseBdev1", 00:23:52.771 "uuid": "3c9433fd-ef43-4327-bac1-d7bf23e3d390", 00:23:52.771 "is_configured": true, 00:23:52.771 "data_offset": 0, 00:23:52.771 "data_size": 65536 00:23:52.771 }, 00:23:52.771 { 00:23:52.771 "name": "BaseBdev2", 00:23:52.771 "uuid": "5f5bafc2-3ffe-455f-a811-3ab733887f66", 00:23:52.771 "is_configured": true, 00:23:52.771 "data_offset": 0, 00:23:52.771 "data_size": 65536 00:23:52.771 }, 00:23:52.771 { 00:23:52.771 "name": "BaseBdev3", 00:23:52.771 "uuid": "ba824bc3-9bf0-4420-84b6-c29ef4b3fa8a", 00:23:52.771 "is_configured": true, 00:23:52.771 "data_offset": 0, 00:23:52.771 "data_size": 65536 00:23:52.771 } 00:23:52.771 ] 00:23:52.771 }' 00:23:52.771 16:39:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:52.771 16:39:29 -- common/autotest_common.sh@10 -- # set +x 00:23:53.338 16:39:30 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:53.338 16:39:30 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:53.596 [2024-07-11 16:39:30.274939] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:53.596 16:39:30 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:23:53.596 16:39:30 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:23:53.596 16:39:30 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:53.854 16:39:30 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:53.854 16:39:30 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:53.854 16:39:30 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:53.854 16:39:30 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:53.854 16:39:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:53.854 16:39:30 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:53.854 16:39:30 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:53.854 16:39:30 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:53.854 16:39:30 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:53.854 16:39:30 -- bdev/nbd_common.sh@12 -- # local i 00:23:53.854 16:39:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:53.854 16:39:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:53.854 16:39:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:54.114 [2024-07-11 16:39:30.662943] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:54.114 /dev/nbd0 00:23:54.114 16:39:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:54.114 16:39:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:54.114 16:39:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:23:54.114 16:39:30 -- common/autotest_common.sh@857 -- # local i 00:23:54.114 16:39:30 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:23:54.114 16:39:30 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:23:54.114 16:39:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:23:54.114 16:39:30 -- common/autotest_common.sh@861 -- # break 00:23:54.114 16:39:30 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:23:54.114 16:39:30 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:23:54.114 16:39:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:54.114 1+0 records in 00:23:54.114 1+0 records out 00:23:54.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312759 s, 13.1 MB/s 00:23:54.114 16:39:30 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:54.114 16:39:30 -- common/autotest_common.sh@874 -- # size=4096 00:23:54.114 16:39:30 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:54.114 16:39:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:23:54.114 16:39:30 -- common/autotest_common.sh@877 -- # return 0 00:23:54.114 16:39:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:54.114 16:39:30 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:54.114 16:39:30 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:54.114 16:39:30 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:54.114 16:39:30 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:54.114 16:39:30 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:23:54.372 512+0 records in 00:23:54.372 512+0 records out 00:23:54.372 67108864 bytes (67 MB, 64 MiB) copied, 0.356881 s, 188 MB/s 00:23:54.372 16:39:31 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:54.372 16:39:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:54.372 16:39:31 
-- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:54.372 16:39:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:54.372 16:39:31 -- bdev/nbd_common.sh@51 -- # local i 00:23:54.372 16:39:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:54.372 16:39:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:54.630 16:39:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:54.630 16:39:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:54.630 16:39:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:54.630 16:39:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:54.630 16:39:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:54.630 16:39:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:54.630 16:39:31 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:54.630 [2024-07-11 16:39:31.286610] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:54.630 16:39:31 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:54.630 16:39:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:54.630 16:39:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:54.630 16:39:31 -- bdev/nbd_common.sh@41 -- # break 00:23:54.630 16:39:31 -- bdev/nbd_common.sh@45 -- # return 0 00:23:54.630 16:39:31 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:54.888 [2024-07-11 16:39:31.632449] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:54.888 16:39:31 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:54.888 16:39:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:54.888 16:39:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:54.888 16:39:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:54.888 16:39:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:54.888 16:39:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:54.888 16:39:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:54.888 16:39:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:54.888 16:39:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:54.888 16:39:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:54.888 16:39:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.888 16:39:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.146 16:39:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:55.146 "name": "raid_bdev1", 00:23:55.146 "uuid": "a34ded52-f319-4277-bc65-0c59a7ef17e3", 00:23:55.146 "strip_size_kb": 64, 00:23:55.146 "state": "online", 00:23:55.146 "raid_level": "raid5f", 00:23:55.146 "superblock": false, 00:23:55.146 "num_base_bdevs": 3, 00:23:55.146 "num_base_bdevs_discovered": 2, 00:23:55.146 "num_base_bdevs_operational": 2, 00:23:55.146 "base_bdevs_list": [ 00:23:55.146 { 00:23:55.146 "name": null, 00:23:55.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.146 "is_configured": false, 00:23:55.146 "data_offset": 0, 00:23:55.146 "data_size": 65536 00:23:55.146 }, 00:23:55.146 { 00:23:55.146 "name": "BaseBdev2", 00:23:55.146 "uuid": "5f5bafc2-3ffe-455f-a811-3ab733887f66", 00:23:55.146 "is_configured": true, 00:23:55.146 "data_offset": 0, 00:23:55.146 "data_size": 65536 00:23:55.146 }, 
00:23:55.146 { 00:23:55.146 "name": "BaseBdev3", 00:23:55.146 "uuid": "ba824bc3-9bf0-4420-84b6-c29ef4b3fa8a", 00:23:55.146 "is_configured": true, 00:23:55.146 "data_offset": 0, 00:23:55.146 "data_size": 65536 00:23:55.146 } 00:23:55.146 ] 00:23:55.146 }' 00:23:55.146 16:39:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:55.146 16:39:31 -- common/autotest_common.sh@10 -- # set +x 00:23:55.714 16:39:32 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:55.973 [2024-07-11 16:39:32.692658] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:55.973 [2024-07-11 16:39:32.692715] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:55.973 [2024-07-11 16:39:32.703313] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cfb0 00:23:55.973 [2024-07-11 16:39:32.708612] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:55.973 16:39:32 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:56.908 16:39:33 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:56.908 16:39:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:56.908 16:39:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:56.908 16:39:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:56.908 16:39:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.166 16:39:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.166 16:39:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.166 16:39:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:57.166 "name": "raid_bdev1", 00:23:57.166 "uuid": "a34ded52-f319-4277-bc65-0c59a7ef17e3", 00:23:57.166 "strip_size_kb": 64, 00:23:57.166 "state": "online", 00:23:57.166 "raid_level": "raid5f", 00:23:57.166 "superblock": false, 00:23:57.166 "num_base_bdevs": 3, 00:23:57.166 "num_base_bdevs_discovered": 3, 00:23:57.166 "num_base_bdevs_operational": 3, 00:23:57.166 "process": { 00:23:57.166 "type": "rebuild", 00:23:57.166 "target": "spare", 00:23:57.166 "progress": { 00:23:57.166 "blocks": 24576, 00:23:57.166 "percent": 18 00:23:57.166 } 00:23:57.166 }, 00:23:57.166 "base_bdevs_list": [ 00:23:57.166 { 00:23:57.166 "name": "spare", 00:23:57.166 "uuid": "395947db-a932-5c02-b8cb-fd49a7453426", 00:23:57.166 "is_configured": true, 00:23:57.166 "data_offset": 0, 00:23:57.166 "data_size": 65536 00:23:57.166 }, 00:23:57.166 { 00:23:57.166 "name": "BaseBdev2", 00:23:57.166 "uuid": "5f5bafc2-3ffe-455f-a811-3ab733887f66", 00:23:57.166 "is_configured": true, 00:23:57.166 "data_offset": 0, 00:23:57.166 "data_size": 65536 00:23:57.166 }, 00:23:57.166 { 00:23:57.166 "name": "BaseBdev3", 00:23:57.166 "uuid": "ba824bc3-9bf0-4420-84b6-c29ef4b3fa8a", 00:23:57.166 "is_configured": true, 00:23:57.166 "data_offset": 0, 00:23:57.166 "data_size": 65536 00:23:57.166 } 00:23:57.166 ] 00:23:57.166 }' 00:23:57.166 16:39:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:57.423 16:39:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:57.423 16:39:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:57.423 16:39:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:57.423 16:39:34 -- bdev/bdev_raid.sh@604 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:57.681 [2024-07-11 16:39:34.277659] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:57.681 [2024-07-11 16:39:34.320521] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:57.681 [2024-07-11 16:39:34.320607] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:57.681 16:39:34 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:57.681 16:39:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:57.681 16:39:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:57.681 16:39:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:57.681 16:39:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:57.681 16:39:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:57.681 16:39:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:57.681 16:39:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:57.681 16:39:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:57.681 16:39:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:57.681 16:39:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.681 16:39:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.940 16:39:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:57.940 "name": "raid_bdev1", 00:23:57.940 "uuid": "a34ded52-f319-4277-bc65-0c59a7ef17e3", 00:23:57.940 "strip_size_kb": 64, 00:23:57.940 "state": "online", 00:23:57.940 "raid_level": "raid5f", 00:23:57.940 "superblock": false, 00:23:57.940 "num_base_bdevs": 3, 00:23:57.940 "num_base_bdevs_discovered": 2, 00:23:57.940 "num_base_bdevs_operational": 2, 00:23:57.940 "base_bdevs_list": [ 00:23:57.940 { 00:23:57.940 "name": null, 00:23:57.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.940 "is_configured": false, 00:23:57.940 "data_offset": 0, 00:23:57.940 "data_size": 65536 00:23:57.940 }, 00:23:57.940 { 00:23:57.940 "name": "BaseBdev2", 00:23:57.940 "uuid": "5f5bafc2-3ffe-455f-a811-3ab733887f66", 00:23:57.940 "is_configured": true, 00:23:57.940 "data_offset": 0, 00:23:57.940 "data_size": 65536 00:23:57.940 }, 00:23:57.940 { 00:23:57.940 "name": "BaseBdev3", 00:23:57.940 "uuid": "ba824bc3-9bf0-4420-84b6-c29ef4b3fa8a", 00:23:57.940 "is_configured": true, 00:23:57.940 "data_offset": 0, 00:23:57.940 "data_size": 65536 00:23:57.940 } 00:23:57.940 ] 00:23:57.940 }' 00:23:57.940 16:39:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:57.940 16:39:34 -- common/autotest_common.sh@10 -- # set +x 00:23:58.541 16:39:35 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:58.541 16:39:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:58.541 16:39:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:58.541 16:39:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:58.541 16:39:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:58.541 16:39:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.541 16:39:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:58.816 16:39:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:58.816 "name": 
"raid_bdev1", 00:23:58.816 "uuid": "a34ded52-f319-4277-bc65-0c59a7ef17e3", 00:23:58.816 "strip_size_kb": 64, 00:23:58.816 "state": "online", 00:23:58.816 "raid_level": "raid5f", 00:23:58.816 "superblock": false, 00:23:58.816 "num_base_bdevs": 3, 00:23:58.816 "num_base_bdevs_discovered": 2, 00:23:58.816 "num_base_bdevs_operational": 2, 00:23:58.816 "base_bdevs_list": [ 00:23:58.816 { 00:23:58.816 "name": null, 00:23:58.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.816 "is_configured": false, 00:23:58.816 "data_offset": 0, 00:23:58.816 "data_size": 65536 00:23:58.816 }, 00:23:58.816 { 00:23:58.816 "name": "BaseBdev2", 00:23:58.816 "uuid": "5f5bafc2-3ffe-455f-a811-3ab733887f66", 00:23:58.816 "is_configured": true, 00:23:58.816 "data_offset": 0, 00:23:58.816 "data_size": 65536 00:23:58.816 }, 00:23:58.816 { 00:23:58.816 "name": "BaseBdev3", 00:23:58.816 "uuid": "ba824bc3-9bf0-4420-84b6-c29ef4b3fa8a", 00:23:58.816 "is_configured": true, 00:23:58.816 "data_offset": 0, 00:23:58.816 "data_size": 65536 00:23:58.816 } 00:23:58.816 ] 00:23:58.816 }' 00:23:58.816 16:39:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:58.816 16:39:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:58.816 16:39:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:58.816 16:39:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:58.816 16:39:35 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:59.074 [2024-07-11 16:39:35.720505] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:59.074 [2024-07-11 16:39:35.720546] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:59.074 [2024-07-11 16:39:35.731089] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d150 00:23:59.074 [2024-07-11 16:39:35.737023] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:59.074 16:39:35 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:00.007 16:39:36 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.007 16:39:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:00.007 16:39:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:00.007 16:39:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:00.007 16:39:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:00.007 16:39:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.007 16:39:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.265 16:39:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:00.265 "name": "raid_bdev1", 00:24:00.265 "uuid": "a34ded52-f319-4277-bc65-0c59a7ef17e3", 00:24:00.265 "strip_size_kb": 64, 00:24:00.265 "state": "online", 00:24:00.265 "raid_level": "raid5f", 00:24:00.265 "superblock": false, 00:24:00.265 "num_base_bdevs": 3, 00:24:00.265 "num_base_bdevs_discovered": 3, 00:24:00.265 "num_base_bdevs_operational": 3, 00:24:00.265 "process": { 00:24:00.265 "type": "rebuild", 00:24:00.265 "target": "spare", 00:24:00.265 "progress": { 00:24:00.265 "blocks": 24576, 00:24:00.265 "percent": 18 00:24:00.265 } 00:24:00.265 }, 00:24:00.265 "base_bdevs_list": [ 00:24:00.265 { 00:24:00.265 "name": "spare", 00:24:00.265 "uuid": "395947db-a932-5c02-b8cb-fd49a7453426", 
00:24:00.265 "is_configured": true, 00:24:00.265 "data_offset": 0, 00:24:00.265 "data_size": 65536 00:24:00.265 }, 00:24:00.265 { 00:24:00.265 "name": "BaseBdev2", 00:24:00.265 "uuid": "5f5bafc2-3ffe-455f-a811-3ab733887f66", 00:24:00.265 "is_configured": true, 00:24:00.265 "data_offset": 0, 00:24:00.265 "data_size": 65536 00:24:00.265 }, 00:24:00.265 { 00:24:00.265 "name": "BaseBdev3", 00:24:00.265 "uuid": "ba824bc3-9bf0-4420-84b6-c29ef4b3fa8a", 00:24:00.265 "is_configured": true, 00:24:00.265 "data_offset": 0, 00:24:00.265 "data_size": 65536 00:24:00.265 } 00:24:00.265 ] 00:24:00.266 }' 00:24:00.266 16:39:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:00.266 16:39:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:00.266 16:39:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:00.523 16:39:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:00.523 16:39:37 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:24:00.523 16:39:37 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:24:00.523 16:39:37 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:00.523 16:39:37 -- bdev/bdev_raid.sh@657 -- # local timeout=593 00:24:00.523 16:39:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:00.523 16:39:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.523 16:39:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:00.523 16:39:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:00.523 16:39:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:00.523 16:39:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:00.523 16:39:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.523 16:39:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.781 16:39:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:00.781 "name": "raid_bdev1", 00:24:00.781 "uuid": "a34ded52-f319-4277-bc65-0c59a7ef17e3", 00:24:00.781 "strip_size_kb": 64, 00:24:00.781 "state": "online", 00:24:00.781 "raid_level": "raid5f", 00:24:00.781 "superblock": false, 00:24:00.781 "num_base_bdevs": 3, 00:24:00.781 "num_base_bdevs_discovered": 3, 00:24:00.781 "num_base_bdevs_operational": 3, 00:24:00.781 "process": { 00:24:00.781 "type": "rebuild", 00:24:00.781 "target": "spare", 00:24:00.781 "progress": { 00:24:00.781 "blocks": 30720, 00:24:00.781 "percent": 23 00:24:00.781 } 00:24:00.781 }, 00:24:00.781 "base_bdevs_list": [ 00:24:00.781 { 00:24:00.781 "name": "spare", 00:24:00.781 "uuid": "395947db-a932-5c02-b8cb-fd49a7453426", 00:24:00.781 "is_configured": true, 00:24:00.781 "data_offset": 0, 00:24:00.781 "data_size": 65536 00:24:00.781 }, 00:24:00.781 { 00:24:00.781 "name": "BaseBdev2", 00:24:00.781 "uuid": "5f5bafc2-3ffe-455f-a811-3ab733887f66", 00:24:00.781 "is_configured": true, 00:24:00.781 "data_offset": 0, 00:24:00.781 "data_size": 65536 00:24:00.781 }, 00:24:00.781 { 00:24:00.781 "name": "BaseBdev3", 00:24:00.781 "uuid": "ba824bc3-9bf0-4420-84b6-c29ef4b3fa8a", 00:24:00.781 "is_configured": true, 00:24:00.781 "data_offset": 0, 00:24:00.781 "data_size": 65536 00:24:00.781 } 00:24:00.781 ] 00:24:00.781 }' 00:24:00.781 16:39:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:00.781 16:39:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:00.781 16:39:37 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:24:00.781 16:39:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:00.781 16:39:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:01.716 16:39:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:01.716 16:39:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.716 16:39:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:01.716 16:39:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:01.716 16:39:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:01.716 16:39:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:01.716 16:39:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.716 16:39:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.975 16:39:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:01.975 "name": "raid_bdev1", 00:24:01.975 "uuid": "a34ded52-f319-4277-bc65-0c59a7ef17e3", 00:24:01.975 "strip_size_kb": 64, 00:24:01.975 "state": "online", 00:24:01.975 "raid_level": "raid5f", 00:24:01.975 "superblock": false, 00:24:01.975 "num_base_bdevs": 3, 00:24:01.975 "num_base_bdevs_discovered": 3, 00:24:01.975 "num_base_bdevs_operational": 3, 00:24:01.975 "process": { 00:24:01.975 "type": "rebuild", 00:24:01.975 "target": "spare", 00:24:01.975 "progress": { 00:24:01.975 "blocks": 59392, 00:24:01.975 "percent": 45 00:24:01.975 } 00:24:01.975 }, 00:24:01.975 "base_bdevs_list": [ 00:24:01.975 { 00:24:01.975 "name": "spare", 00:24:01.975 "uuid": "395947db-a932-5c02-b8cb-fd49a7453426", 00:24:01.975 "is_configured": true, 00:24:01.975 "data_offset": 0, 00:24:01.975 "data_size": 65536 00:24:01.975 }, 00:24:01.975 { 00:24:01.975 "name": "BaseBdev2", 00:24:01.975 "uuid": "5f5bafc2-3ffe-455f-a811-3ab733887f66", 00:24:01.975 "is_configured": true, 00:24:01.975 "data_offset": 0, 00:24:01.975 "data_size": 65536 00:24:01.975 }, 00:24:01.975 { 00:24:01.975 "name": "BaseBdev3", 00:24:01.975 "uuid": "ba824bc3-9bf0-4420-84b6-c29ef4b3fa8a", 00:24:01.975 "is_configured": true, 00:24:01.975 "data_offset": 0, 00:24:01.975 "data_size": 65536 00:24:01.975 } 00:24:01.975 ] 00:24:01.975 }' 00:24:01.975 16:39:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:01.975 16:39:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:01.975 16:39:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:02.234 16:39:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:02.234 16:39:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:03.170 16:39:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:03.170 16:39:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:03.170 16:39:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:03.170 16:39:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:03.170 16:39:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:03.170 16:39:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:03.170 16:39:39 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.170 16:39:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.429 16:39:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:03.430 "name": "raid_bdev1", 00:24:03.430 "uuid": "a34ded52-f319-4277-bc65-0c59a7ef17e3", 
00:24:03.430 "strip_size_kb": 64, 00:24:03.430 "state": "online", 00:24:03.430 "raid_level": "raid5f", 00:24:03.430 "superblock": false, 00:24:03.430 "num_base_bdevs": 3, 00:24:03.430 "num_base_bdevs_discovered": 3, 00:24:03.430 "num_base_bdevs_operational": 3, 00:24:03.430 "process": { 00:24:03.430 "type": "rebuild", 00:24:03.430 "target": "spare", 00:24:03.430 "progress": { 00:24:03.430 "blocks": 86016, 00:24:03.430 "percent": 65 00:24:03.430 } 00:24:03.430 }, 00:24:03.430 "base_bdevs_list": [ 00:24:03.430 { 00:24:03.430 "name": "spare", 00:24:03.430 "uuid": "395947db-a932-5c02-b8cb-fd49a7453426", 00:24:03.430 "is_configured": true, 00:24:03.430 "data_offset": 0, 00:24:03.430 "data_size": 65536 00:24:03.430 }, 00:24:03.430 { 00:24:03.430 "name": "BaseBdev2", 00:24:03.430 "uuid": "5f5bafc2-3ffe-455f-a811-3ab733887f66", 00:24:03.430 "is_configured": true, 00:24:03.430 "data_offset": 0, 00:24:03.430 "data_size": 65536 00:24:03.430 }, 00:24:03.430 { 00:24:03.430 "name": "BaseBdev3", 00:24:03.430 "uuid": "ba824bc3-9bf0-4420-84b6-c29ef4b3fa8a", 00:24:03.430 "is_configured": true, 00:24:03.430 "data_offset": 0, 00:24:03.430 "data_size": 65536 00:24:03.430 } 00:24:03.430 ] 00:24:03.430 }' 00:24:03.430 16:39:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:03.430 16:39:40 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:03.430 16:39:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:03.430 16:39:40 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:03.430 16:39:40 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:04.366 16:39:41 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:04.366 16:39:41 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:04.366 16:39:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:04.366 16:39:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:04.366 16:39:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:04.366 16:39:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:04.366 16:39:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.366 16:39:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.625 16:39:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:04.625 "name": "raid_bdev1", 00:24:04.625 "uuid": "a34ded52-f319-4277-bc65-0c59a7ef17e3", 00:24:04.625 "strip_size_kb": 64, 00:24:04.625 "state": "online", 00:24:04.625 "raid_level": "raid5f", 00:24:04.625 "superblock": false, 00:24:04.625 "num_base_bdevs": 3, 00:24:04.625 "num_base_bdevs_discovered": 3, 00:24:04.625 "num_base_bdevs_operational": 3, 00:24:04.625 "process": { 00:24:04.625 "type": "rebuild", 00:24:04.625 "target": "spare", 00:24:04.625 "progress": { 00:24:04.625 "blocks": 112640, 00:24:04.625 "percent": 85 00:24:04.625 } 00:24:04.625 }, 00:24:04.625 "base_bdevs_list": [ 00:24:04.625 { 00:24:04.625 "name": "spare", 00:24:04.625 "uuid": "395947db-a932-5c02-b8cb-fd49a7453426", 00:24:04.625 "is_configured": true, 00:24:04.625 "data_offset": 0, 00:24:04.625 "data_size": 65536 00:24:04.625 }, 00:24:04.625 { 00:24:04.625 "name": "BaseBdev2", 00:24:04.625 "uuid": "5f5bafc2-3ffe-455f-a811-3ab733887f66", 00:24:04.626 "is_configured": true, 00:24:04.626 "data_offset": 0, 00:24:04.626 "data_size": 65536 00:24:04.626 }, 00:24:04.626 { 00:24:04.626 "name": "BaseBdev3", 00:24:04.626 "uuid": "ba824bc3-9bf0-4420-84b6-c29ef4b3fa8a", 
00:24:04.626 "is_configured": true, 00:24:04.626 "data_offset": 0, 00:24:04.626 "data_size": 65536 00:24:04.626 } 00:24:04.626 ] 00:24:04.626 }' 00:24:04.626 16:39:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:04.626 16:39:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:04.626 16:39:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:04.885 16:39:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:04.885 16:39:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:05.453 [2024-07-11 16:39:42.183995] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:05.453 [2024-07-11 16:39:42.184060] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:05.453 [2024-07-11 16:39:42.184119] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:05.712 16:39:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:05.712 16:39:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.712 16:39:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:05.712 16:39:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:05.712 16:39:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:05.712 16:39:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.712 16:39:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.712 16:39:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.971 16:39:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:05.971 "name": "raid_bdev1", 00:24:05.971 "uuid": "a34ded52-f319-4277-bc65-0c59a7ef17e3", 00:24:05.971 "strip_size_kb": 64, 00:24:05.971 "state": "online", 00:24:05.971 "raid_level": "raid5f", 00:24:05.971 "superblock": false, 00:24:05.971 "num_base_bdevs": 3, 00:24:05.971 "num_base_bdevs_discovered": 3, 00:24:05.971 "num_base_bdevs_operational": 3, 00:24:05.971 "base_bdevs_list": [ 00:24:05.971 { 00:24:05.971 "name": "spare", 00:24:05.971 "uuid": "395947db-a932-5c02-b8cb-fd49a7453426", 00:24:05.971 "is_configured": true, 00:24:05.971 "data_offset": 0, 00:24:05.971 "data_size": 65536 00:24:05.971 }, 00:24:05.971 { 00:24:05.971 "name": "BaseBdev2", 00:24:05.971 "uuid": "5f5bafc2-3ffe-455f-a811-3ab733887f66", 00:24:05.971 "is_configured": true, 00:24:05.971 "data_offset": 0, 00:24:05.971 "data_size": 65536 00:24:05.971 }, 00:24:05.971 { 00:24:05.971 "name": "BaseBdev3", 00:24:05.971 "uuid": "ba824bc3-9bf0-4420-84b6-c29ef4b3fa8a", 00:24:05.971 "is_configured": true, 00:24:05.971 "data_offset": 0, 00:24:05.971 "data_size": 65536 00:24:05.971 } 00:24:05.971 ] 00:24:05.971 }' 00:24:05.971 16:39:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:05.971 16:39:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:05.971 16:39:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:06.230 16:39:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:06.230 16:39:42 -- bdev/bdev_raid.sh@660 -- # break 00:24:06.230 16:39:42 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:06.230 16:39:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:06.230 16:39:42 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:06.230 16:39:42 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:06.230 16:39:42 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:06.230 16:39:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.230 16:39:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.230 16:39:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:06.230 "name": "raid_bdev1", 00:24:06.231 "uuid": "a34ded52-f319-4277-bc65-0c59a7ef17e3", 00:24:06.231 "strip_size_kb": 64, 00:24:06.231 "state": "online", 00:24:06.231 "raid_level": "raid5f", 00:24:06.231 "superblock": false, 00:24:06.231 "num_base_bdevs": 3, 00:24:06.231 "num_base_bdevs_discovered": 3, 00:24:06.231 "num_base_bdevs_operational": 3, 00:24:06.231 "base_bdevs_list": [ 00:24:06.231 { 00:24:06.231 "name": "spare", 00:24:06.231 "uuid": "395947db-a932-5c02-b8cb-fd49a7453426", 00:24:06.231 "is_configured": true, 00:24:06.231 "data_offset": 0, 00:24:06.231 "data_size": 65536 00:24:06.231 }, 00:24:06.231 { 00:24:06.231 "name": "BaseBdev2", 00:24:06.231 "uuid": "5f5bafc2-3ffe-455f-a811-3ab733887f66", 00:24:06.231 "is_configured": true, 00:24:06.231 "data_offset": 0, 00:24:06.231 "data_size": 65536 00:24:06.231 }, 00:24:06.231 { 00:24:06.231 "name": "BaseBdev3", 00:24:06.231 "uuid": "ba824bc3-9bf0-4420-84b6-c29ef4b3fa8a", 00:24:06.231 "is_configured": true, 00:24:06.231 "data_offset": 0, 00:24:06.231 "data_size": 65536 00:24:06.231 } 00:24:06.231 ] 00:24:06.231 }' 00:24:06.231 16:39:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.489 16:39:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.746 16:39:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:06.746 "name": "raid_bdev1", 00:24:06.746 "uuid": "a34ded52-f319-4277-bc65-0c59a7ef17e3", 00:24:06.746 "strip_size_kb": 64, 00:24:06.746 "state": "online", 00:24:06.746 "raid_level": "raid5f", 00:24:06.746 "superblock": false, 00:24:06.746 "num_base_bdevs": 3, 00:24:06.746 "num_base_bdevs_discovered": 3, 00:24:06.746 "num_base_bdevs_operational": 3, 00:24:06.746 "base_bdevs_list": [ 00:24:06.746 { 00:24:06.746 "name": "spare", 00:24:06.746 "uuid": "395947db-a932-5c02-b8cb-fd49a7453426", 00:24:06.746 "is_configured": true, 00:24:06.746 "data_offset": 0, 00:24:06.746 "data_size": 65536 00:24:06.746 }, 00:24:06.746 { 00:24:06.746 "name": "BaseBdev2", 
00:24:06.746 "uuid": "5f5bafc2-3ffe-455f-a811-3ab733887f66", 00:24:06.746 "is_configured": true, 00:24:06.746 "data_offset": 0, 00:24:06.746 "data_size": 65536 00:24:06.746 }, 00:24:06.746 { 00:24:06.746 "name": "BaseBdev3", 00:24:06.746 "uuid": "ba824bc3-9bf0-4420-84b6-c29ef4b3fa8a", 00:24:06.746 "is_configured": true, 00:24:06.746 "data_offset": 0, 00:24:06.746 "data_size": 65536 00:24:06.746 } 00:24:06.746 ] 00:24:06.746 }' 00:24:06.746 16:39:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:06.746 16:39:43 -- common/autotest_common.sh@10 -- # set +x 00:24:07.313 16:39:44 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:07.571 [2024-07-11 16:39:44.272685] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:07.571 [2024-07-11 16:39:44.272727] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:07.571 [2024-07-11 16:39:44.272805] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:07.571 [2024-07-11 16:39:44.272874] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:07.572 [2024-07-11 16:39:44.272885] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:24:07.572 16:39:44 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.572 16:39:44 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:07.831 16:39:44 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:07.831 16:39:44 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:07.831 16:39:44 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:07.831 16:39:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:07.831 16:39:44 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:07.831 16:39:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:07.831 16:39:44 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:07.831 16:39:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:07.831 16:39:44 -- bdev/nbd_common.sh@12 -- # local i 00:24:07.831 16:39:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:07.831 16:39:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:07.831 16:39:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:08.090 /dev/nbd0 00:24:08.090 16:39:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:08.090 16:39:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:08.090 16:39:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:08.090 16:39:44 -- common/autotest_common.sh@857 -- # local i 00:24:08.090 16:39:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:08.090 16:39:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:08.090 16:39:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:08.090 16:39:44 -- common/autotest_common.sh@861 -- # break 00:24:08.090 16:39:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:08.090 16:39:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:08.090 16:39:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:08.090 1+0 records in 00:24:08.090 1+0 records out 
00:24:08.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502838 s, 8.1 MB/s 00:24:08.090 16:39:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:08.090 16:39:44 -- common/autotest_common.sh@874 -- # size=4096 00:24:08.090 16:39:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:08.090 16:39:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:08.090 16:39:44 -- common/autotest_common.sh@877 -- # return 0 00:24:08.090 16:39:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:08.090 16:39:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:08.090 16:39:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:08.349 /dev/nbd1 00:24:08.349 16:39:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:08.349 16:39:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:08.349 16:39:44 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:24:08.349 16:39:44 -- common/autotest_common.sh@857 -- # local i 00:24:08.349 16:39:44 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:08.349 16:39:44 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:08.349 16:39:44 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:24:08.349 16:39:44 -- common/autotest_common.sh@861 -- # break 00:24:08.349 16:39:44 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:08.349 16:39:44 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:08.349 16:39:44 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:08.349 1+0 records in 00:24:08.349 1+0 records out 00:24:08.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643861 s, 6.4 MB/s 00:24:08.349 16:39:44 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:08.349 16:39:44 -- common/autotest_common.sh@874 -- # size=4096 00:24:08.349 16:39:44 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:08.349 16:39:44 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:08.349 16:39:44 -- common/autotest_common.sh@877 -- # return 0 00:24:08.349 16:39:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:08.349 16:39:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:08.349 16:39:44 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:24:08.608 16:39:45 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:08.608 16:39:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:08.608 16:39:45 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:08.608 16:39:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:08.608 16:39:45 -- bdev/nbd_common.sh@51 -- # local i 00:24:08.608 16:39:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:08.608 16:39:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@41 -- # break 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@45 -- # return 0 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:08.867 16:39:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:09.126 16:39:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:09.126 16:39:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:09.126 16:39:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:09.126 16:39:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:09.126 16:39:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:09.126 16:39:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:09.126 16:39:45 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:09.386 16:39:45 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:09.386 16:39:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:09.386 16:39:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:09.386 16:39:45 -- bdev/nbd_common.sh@41 -- # break 00:24:09.386 16:39:45 -- bdev/nbd_common.sh@45 -- # return 0 00:24:09.386 16:39:45 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:24:09.386 16:39:45 -- bdev/bdev_raid.sh@709 -- # killprocess 131732 00:24:09.386 16:39:45 -- common/autotest_common.sh@926 -- # '[' -z 131732 ']' 00:24:09.386 16:39:45 -- common/autotest_common.sh@930 -- # kill -0 131732 00:24:09.386 16:39:45 -- common/autotest_common.sh@931 -- # uname 00:24:09.386 16:39:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:09.386 16:39:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 131732 00:24:09.386 killing process with pid 131732 00:24:09.386 Received shutdown signal, test time was about 60.000000 seconds 00:24:09.386 00:24:09.386 Latency(us) 00:24:09.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.386 =================================================================================================================== 00:24:09.386 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:09.386 16:39:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:09.386 16:39:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:09.386 16:39:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 131732' 00:24:09.386 16:39:45 -- common/autotest_common.sh@945 -- # kill 131732 00:24:09.386 16:39:45 -- common/autotest_common.sh@950 -- # wait 131732 00:24:09.386 [2024-07-11 16:39:45.994461] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:09.645 [2024-07-11 16:39:46.320651] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:10.582 ************************************ 00:24:10.582 END TEST raid5f_rebuild_test 00:24:10.582 ************************************ 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:10.582 00:24:10.582 real 0m20.427s 00:24:10.582 user 0m30.710s 00:24:10.582 sys 0m2.240s 00:24:10.582 16:39:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:10.582 16:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@749 
-- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:24:10.582 16:39:47 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:24:10.582 16:39:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:10.582 16:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:10.582 ************************************ 00:24:10.582 START TEST raid5f_rebuild_test_sb 00:24:10.582 ************************************ 00:24:10.582 16:39:47 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 3 true false 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@544 -- # raid_pid=132319 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@545 -- # waitforlisten 132319 /var/tmp/spdk-raid.sock 00:24:10.582 16:39:47 -- common/autotest_common.sh@819 -- # '[' -z 132319 ']' 00:24:10.582 16:39:47 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:24:10.582 16:39:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:10.582 16:39:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:10.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:10.582 16:39:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
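A minimal sketch of the wait loop implied by the `waitforlisten` / `max_retries=100` fragments above: poll the UNIX-domain RPC socket until the freshly launched bdevperf process answers, bailing out early if it dies. The helper body here is an assumption reconstructed from the log — the upstream `autotest_common.sh` implementation may differ — and `rpc_get_methods` is a standard SPDK RPC used only as a cheap liveness probe.

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk-raid.sock} i
        for ((i = 0; i < 100; i++)); do
            # The RPC server answers rpc_get_methods only once the app is up.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                   rpc_get_methods &> /dev/null; then
                return 0
            fi
            kill -0 "$pid" 2> /dev/null || return 1  # process died before listening
            sleep 0.1
        done
        return 1  # retries exhausted
    }
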
00:24:10.582 16:39:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:10.582 16:39:47 -- common/autotest_common.sh@10 -- # set +x 00:24:10.582 [2024-07-11 16:39:47.359823] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:10.582 [2024-07-11 16:39:47.360610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132319 ] 00:24:10.582 I/O size of 3145728 is greater than zero copy threshold (65536). 00:24:10.582 Zero copy mechanism will not be used. 00:24:10.841 [2024-07-11 16:39:47.522283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.101 [2024-07-11 16:39:47.677206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.101 [2024-07-11 16:39:47.839296] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:11.670 16:39:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:11.670 16:39:48 -- common/autotest_common.sh@852 -- # return 0 00:24:11.670 16:39:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:11.670 16:39:48 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:11.670 16:39:48 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:11.929 BaseBdev1_malloc 00:24:11.929 16:39:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:11.929 [2024-07-11 16:39:48.704115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:11.929 [2024-07-11 16:39:48.704213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.929 [2024-07-11 16:39:48.704245] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:11.929 [2024-07-11 16:39:48.704288] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.929 [2024-07-11 16:39:48.706307] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.929 [2024-07-11 16:39:48.706354] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:11.929 BaseBdev1 00:24:11.929 16:39:48 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:11.929 16:39:48 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:11.929 16:39:48 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:12.189 BaseBdev2_malloc 00:24:12.189 16:39:48 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:12.448 [2024-07-11 16:39:49.121673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:12.448 [2024-07-11 16:39:49.121761] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.448 [2024-07-11 16:39:49.121801] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:12.448 [2024-07-11 16:39:49.121849] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.448 [2024-07-11 16:39:49.123743] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:24:12.448 [2024-07-11 16:39:49.123786] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:12.448 BaseBdev2 00:24:12.448 16:39:49 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:12.448 16:39:49 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:12.448 16:39:49 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:12.706 BaseBdev3_malloc 00:24:12.706 16:39:49 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:12.964 [2024-07-11 16:39:49.534943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:12.964 [2024-07-11 16:39:49.535028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.964 [2024-07-11 16:39:49.535065] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:12.964 [2024-07-11 16:39:49.535102] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.964 [2024-07-11 16:39:49.537013] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.964 [2024-07-11 16:39:49.537082] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:12.964 BaseBdev3 00:24:12.964 16:39:49 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:12.964 spare_malloc 00:24:12.964 16:39:49 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:13.221 spare_delay 00:24:13.221 16:39:49 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:13.483 [2024-07-11 16:39:50.111453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:13.483 [2024-07-11 16:39:50.111544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.483 [2024-07-11 16:39:50.111576] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:24:13.483 [2024-07-11 16:39:50.111612] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.483 [2024-07-11 16:39:50.113575] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.483 [2024-07-11 16:39:50.113627] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:13.483 spare 00:24:13.483 16:39:50 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:24:13.748 [2024-07-11 16:39:50.359567] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:13.748 [2024-07-11 16:39:50.361343] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:13.748 [2024-07-11 16:39:50.361414] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:13.748 [2024-07-11 16:39:50.361621] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:24:13.748 [2024-07-11 
16:39:50.361636] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:13.748 [2024-07-11 16:39:50.361755] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:13.748 [2024-07-11 16:39:50.365934] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:24:13.748 [2024-07-11 16:39:50.365959] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:24:13.748 [2024-07-11 16:39:50.366108] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.748 16:39:50 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:13.748 16:39:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:13.748 16:39:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:13.748 16:39:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:13.748 16:39:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:13.748 16:39:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:13.748 16:39:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:13.748 16:39:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:13.748 16:39:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:13.748 16:39:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:13.748 16:39:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.748 16:39:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:14.006 16:39:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:14.006 "name": "raid_bdev1", 00:24:14.006 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1", 00:24:14.006 "strip_size_kb": 64, 00:24:14.006 "state": "online", 00:24:14.006 "raid_level": "raid5f", 00:24:14.006 "superblock": true, 00:24:14.006 "num_base_bdevs": 3, 00:24:14.006 "num_base_bdevs_discovered": 3, 00:24:14.006 "num_base_bdevs_operational": 3, 00:24:14.006 "base_bdevs_list": [ 00:24:14.006 { 00:24:14.006 "name": "BaseBdev1", 00:24:14.006 "uuid": "5fc659d5-a1a6-5414-a3e8-bd2f2ed73a75", 00:24:14.006 "is_configured": true, 00:24:14.006 "data_offset": 2048, 00:24:14.006 "data_size": 63488 00:24:14.006 }, 00:24:14.006 { 00:24:14.006 "name": "BaseBdev2", 00:24:14.006 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935", 00:24:14.006 "is_configured": true, 00:24:14.006 "data_offset": 2048, 00:24:14.006 "data_size": 63488 00:24:14.006 }, 00:24:14.006 { 00:24:14.006 "name": "BaseBdev3", 00:24:14.006 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b", 00:24:14.006 "is_configured": true, 00:24:14.006 "data_offset": 2048, 00:24:14.006 "data_size": 63488 00:24:14.006 } 00:24:14.006 ] 00:24:14.006 }' 00:24:14.006 16:39:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:14.006 16:39:50 -- common/autotest_common.sh@10 -- # set +x 00:24:14.573 16:39:51 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:14.573 16:39:51 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:14.830 [2024-07-11 16:39:51.382941] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:14.830 16:39:51 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:24:14.830 16:39:51 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.830 
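The `bdev_raid_get_bdevs all` + `jq` pair traced above is the query idiom this whole test leans on. A condensed, standalone form of it follows — paths, socket, and bdev name are taken verbatim from the log, the `// "none"` defaults mirror the filters used by `verify_raid_bdev_process`, and the final query is the `data_offset` check that returns 2048 once superblocks (`-s`) are enabled:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Dump every raid bdev over the RPC socket and keep only raid_bdev1.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    # Rebuild state; falls back to "none" once the process entry disappears.
    jq -r '.process.type   // "none"' <<< "$info"   # "rebuild" while resyncing
    jq -r '.process.target // "none"' <<< "$info"   # "spare" = bdev being rebuilt
    # With superblocks, user data starts past the on-disk metadata:
    "$rpc" -s "$sock" bdev_raid_get_bdevs all |
        jq -r '.[].base_bdevs_list[0].data_offset'  # prints 2048 in this run
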
16:39:51 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:14.830 16:39:51 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:14.830 16:39:51 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:14.830 16:39:51 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:14.830 16:39:51 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:14.830 16:39:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:14.830 16:39:51 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:14.830 16:39:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:14.830 16:39:51 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:14.830 16:39:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:14.830 16:39:51 -- bdev/nbd_common.sh@12 -- # local i 00:24:14.830 16:39:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:14.830 16:39:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:14.830 16:39:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:15.088 [2024-07-11 16:39:51.754928] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:24:15.088 /dev/nbd0 00:24:15.088 16:39:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:15.088 16:39:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:15.088 16:39:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:24:15.088 16:39:51 -- common/autotest_common.sh@857 -- # local i 00:24:15.088 16:39:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:24:15.088 16:39:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:24:15.088 16:39:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:24:15.088 16:39:51 -- common/autotest_common.sh@861 -- # break 00:24:15.088 16:39:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:24:15.088 16:39:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:24:15.088 16:39:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:15.088 1+0 records in 00:24:15.088 1+0 records out 00:24:15.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352111 s, 11.6 MB/s 00:24:15.088 16:39:51 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:15.088 16:39:51 -- common/autotest_common.sh@874 -- # size=4096 00:24:15.088 16:39:51 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:15.088 16:39:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:24:15.088 16:39:51 -- common/autotest_common.sh@877 -- # return 0 00:24:15.088 16:39:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:15.088 16:39:51 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:15.088 16:39:51 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:15.088 16:39:51 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:24:15.088 16:39:51 -- bdev/bdev_raid.sh@582 -- # echo 128 00:24:15.088 16:39:51 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:24:15.346 496+0 records in 00:24:15.346 496+0 records out 00:24:15.346 65011712 bytes (65 MB, 62 MiB) copied, 0.308471 s, 211 MB/s 00:24:15.346 16:39:52 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:15.346 16:39:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:15.346 16:39:52 -- 
bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:15.346 16:39:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:15.346 16:39:52 -- bdev/nbd_common.sh@51 -- # local i 00:24:15.346 16:39:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:15.346 16:39:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:15.604 16:39:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:15.604 16:39:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:15.604 16:39:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:15.604 16:39:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:15.604 16:39:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:15.604 16:39:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:15.604 16:39:52 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:15.604 [2024-07-11 16:39:52.384100] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:15.863 16:39:52 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:15.863 16:39:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:15.863 16:39:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:15.863 16:39:52 -- bdev/nbd_common.sh@41 -- # break 00:24:15.863 16:39:52 -- bdev/nbd_common.sh@45 -- # return 0 00:24:15.863 16:39:52 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:16.122 [2024-07-11 16:39:52.730237] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:16.122 16:39:52 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:16.122 16:39:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:16.122 16:39:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:16.122 16:39:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:16.122 16:39:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:16.122 16:39:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:16.122 16:39:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:16.122 16:39:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:16.122 16:39:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:16.122 16:39:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:16.122 16:39:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.122 16:39:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.383 16:39:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:16.383 "name": "raid_bdev1", 00:24:16.383 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1", 00:24:16.383 "strip_size_kb": 64, 00:24:16.383 "state": "online", 00:24:16.383 "raid_level": "raid5f", 00:24:16.383 "superblock": true, 00:24:16.383 "num_base_bdevs": 3, 00:24:16.383 "num_base_bdevs_discovered": 2, 00:24:16.383 "num_base_bdevs_operational": 2, 00:24:16.383 "base_bdevs_list": [ 00:24:16.383 { 00:24:16.383 "name": null, 00:24:16.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.383 "is_configured": false, 00:24:16.383 "data_offset": 2048, 00:24:16.383 "data_size": 63488 00:24:16.383 }, 00:24:16.383 { 00:24:16.383 "name": "BaseBdev2", 00:24:16.383 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935", 00:24:16.383 "is_configured": true, 00:24:16.383 "data_offset": 2048, 00:24:16.383 "data_size": 63488 00:24:16.383 }, 
00:24:16.383 { 00:24:16.383 "name": "BaseBdev3", 00:24:16.383 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b", 00:24:16.383 "is_configured": true, 00:24:16.383 "data_offset": 2048, 00:24:16.383 "data_size": 63488 00:24:16.383 } 00:24:16.383 ] 00:24:16.383 }' 00:24:16.383 16:39:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:16.383 16:39:52 -- common/autotest_common.sh@10 -- # set +x 00:24:16.947 16:39:53 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:17.205 [2024-07-11 16:39:53.858470] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:17.205 [2024-07-11 16:39:53.858519] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:17.205 [2024-07-11 16:39:53.869188] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002acc0 00:24:17.205 [2024-07-11 16:39:53.874534] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:17.205 16:39:53 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:18.139 16:39:54 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:18.139 16:39:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:18.140 16:39:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:18.140 16:39:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:18.140 16:39:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:18.140 16:39:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.140 16:39:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.398 16:39:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:18.398 "name": "raid_bdev1", 00:24:18.398 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1", 00:24:18.398 "strip_size_kb": 64, 00:24:18.398 "state": "online", 00:24:18.398 "raid_level": "raid5f", 00:24:18.398 "superblock": true, 00:24:18.398 "num_base_bdevs": 3, 00:24:18.398 "num_base_bdevs_discovered": 3, 00:24:18.398 "num_base_bdevs_operational": 3, 00:24:18.398 "process": { 00:24:18.398 "type": "rebuild", 00:24:18.398 "target": "spare", 00:24:18.398 "progress": { 00:24:18.398 "blocks": 24576, 00:24:18.398 "percent": 19 00:24:18.398 } 00:24:18.398 }, 00:24:18.398 "base_bdevs_list": [ 00:24:18.398 { 00:24:18.398 "name": "spare", 00:24:18.398 "uuid": "45e21bc9-5678-5d2d-8ee6-e6164d024b42", 00:24:18.398 "is_configured": true, 00:24:18.398 "data_offset": 2048, 00:24:18.398 "data_size": 63488 00:24:18.398 }, 00:24:18.398 { 00:24:18.398 "name": "BaseBdev2", 00:24:18.398 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935", 00:24:18.398 "is_configured": true, 00:24:18.398 "data_offset": 2048, 00:24:18.398 "data_size": 63488 00:24:18.398 }, 00:24:18.398 { 00:24:18.398 "name": "BaseBdev3", 00:24:18.398 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b", 00:24:18.398 "is_configured": true, 00:24:18.398 "data_offset": 2048, 00:24:18.398 "data_size": 63488 00:24:18.398 } 00:24:18.398 ] 00:24:18.398 }' 00:24:18.398 16:39:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:18.398 16:39:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:18.398 16:39:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:18.656 16:39:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:18.656 16:39:55 -- bdev/bdev_raid.sh@604 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:18.656 [2024-07-11 16:39:55.435604] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:18.915 [2024-07-11 16:39:55.487609] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:18.915 [2024-07-11 16:39:55.487695] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:18.915 16:39:55 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:24:18.915 16:39:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:18.915 16:39:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:18.915 16:39:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:18.915 16:39:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:18.915 16:39:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:24:18.915 16:39:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:18.915 16:39:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:18.915 16:39:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:18.915 16:39:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:18.915 16:39:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.915 16:39:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:19.174 16:39:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:19.174 "name": "raid_bdev1", 00:24:19.174 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1", 00:24:19.174 "strip_size_kb": 64, 00:24:19.174 "state": "online", 00:24:19.174 "raid_level": "raid5f", 00:24:19.174 "superblock": true, 00:24:19.174 "num_base_bdevs": 3, 00:24:19.174 "num_base_bdevs_discovered": 2, 00:24:19.174 "num_base_bdevs_operational": 2, 00:24:19.174 "base_bdevs_list": [ 00:24:19.174 { 00:24:19.174 "name": null, 00:24:19.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:19.174 "is_configured": false, 00:24:19.174 "data_offset": 2048, 00:24:19.174 "data_size": 63488 00:24:19.174 }, 00:24:19.174 { 00:24:19.174 "name": "BaseBdev2", 00:24:19.174 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935", 00:24:19.174 "is_configured": true, 00:24:19.174 "data_offset": 2048, 00:24:19.174 "data_size": 63488 00:24:19.174 }, 00:24:19.174 { 00:24:19.174 "name": "BaseBdev3", 00:24:19.174 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b", 00:24:19.174 "is_configured": true, 00:24:19.174 "data_offset": 2048, 00:24:19.174 "data_size": 63488 00:24:19.174 } 00:24:19.174 ] 00:24:19.174 }' 00:24:19.174 16:39:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:19.174 16:39:55 -- common/autotest_common.sh@10 -- # set +x 00:24:19.741 16:39:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:19.741 16:39:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:19.741 16:39:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:19.741 16:39:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:19.741 16:39:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:19.741 16:39:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.741 16:39:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:20.001 16:39:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:20.001 "name": 
"raid_bdev1", 00:24:20.001 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1", 00:24:20.001 "strip_size_kb": 64, 00:24:20.001 "state": "online", 00:24:20.001 "raid_level": "raid5f", 00:24:20.001 "superblock": true, 00:24:20.001 "num_base_bdevs": 3, 00:24:20.001 "num_base_bdevs_discovered": 2, 00:24:20.001 "num_base_bdevs_operational": 2, 00:24:20.001 "base_bdevs_list": [ 00:24:20.001 { 00:24:20.001 "name": null, 00:24:20.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.001 "is_configured": false, 00:24:20.001 "data_offset": 2048, 00:24:20.001 "data_size": 63488 00:24:20.001 }, 00:24:20.001 { 00:24:20.001 "name": "BaseBdev2", 00:24:20.001 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935", 00:24:20.001 "is_configured": true, 00:24:20.001 "data_offset": 2048, 00:24:20.001 "data_size": 63488 00:24:20.001 }, 00:24:20.001 { 00:24:20.001 "name": "BaseBdev3", 00:24:20.001 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b", 00:24:20.001 "is_configured": true, 00:24:20.001 "data_offset": 2048, 00:24:20.001 "data_size": 63488 00:24:20.001 } 00:24:20.001 ] 00:24:20.001 }' 00:24:20.001 16:39:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:20.001 16:39:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:20.001 16:39:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:20.001 16:39:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:20.001 16:39:56 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:20.259 [2024-07-11 16:39:56.928656] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:20.259 [2024-07-11 16:39:56.928708] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:20.259 [2024-07-11 16:39:56.939234] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ae60 00:24:20.259 [2024-07-11 16:39:56.944735] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:20.259 16:39:56 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:21.195 16:39:57 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:21.195 16:39:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:21.195 16:39:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:21.195 16:39:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:21.195 16:39:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:21.195 16:39:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.195 16:39:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.454 16:39:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:21.454 "name": "raid_bdev1", 00:24:21.454 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1", 00:24:21.454 "strip_size_kb": 64, 00:24:21.454 "state": "online", 00:24:21.454 "raid_level": "raid5f", 00:24:21.454 "superblock": true, 00:24:21.454 "num_base_bdevs": 3, 00:24:21.454 "num_base_bdevs_discovered": 3, 00:24:21.454 "num_base_bdevs_operational": 3, 00:24:21.454 "process": { 00:24:21.454 "type": "rebuild", 00:24:21.454 "target": "spare", 00:24:21.454 "progress": { 00:24:21.454 "blocks": 22528, 00:24:21.454 "percent": 17 00:24:21.454 } 00:24:21.454 }, 00:24:21.454 "base_bdevs_list": [ 00:24:21.454 { 00:24:21.454 "name": "spare", 00:24:21.454 "uuid": "45e21bc9-5678-5d2d-8ee6-e6164d024b42", 
00:24:21.454 "is_configured": true, 00:24:21.454 "data_offset": 2048, 00:24:21.454 "data_size": 63488 00:24:21.454 }, 00:24:21.454 { 00:24:21.454 "name": "BaseBdev2", 00:24:21.454 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935", 00:24:21.454 "is_configured": true, 00:24:21.454 "data_offset": 2048, 00:24:21.454 "data_size": 63488 00:24:21.454 }, 00:24:21.454 { 00:24:21.454 "name": "BaseBdev3", 00:24:21.454 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b", 00:24:21.454 "is_configured": true, 00:24:21.454 "data_offset": 2048, 00:24:21.454 "data_size": 63488 00:24:21.454 } 00:24:21.454 ] 00:24:21.454 }' 00:24:21.454 16:39:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:21.454 16:39:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:21.454 16:39:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:21.713 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@657 -- # local timeout=614 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.713 16:39:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:21.713 "name": "raid_bdev1", 00:24:21.713 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1", 00:24:21.713 "strip_size_kb": 64, 00:24:21.713 "state": "online", 00:24:21.713 "raid_level": "raid5f", 00:24:21.713 "superblock": true, 00:24:21.713 "num_base_bdevs": 3, 00:24:21.713 "num_base_bdevs_discovered": 3, 00:24:21.713 "num_base_bdevs_operational": 3, 00:24:21.713 "process": { 00:24:21.713 "type": "rebuild", 00:24:21.713 "target": "spare", 00:24:21.713 "progress": { 00:24:21.713 "blocks": 30720, 00:24:21.713 "percent": 24 00:24:21.713 } 00:24:21.713 }, 00:24:21.713 "base_bdevs_list": [ 00:24:21.713 { 00:24:21.713 "name": "spare", 00:24:21.713 "uuid": "45e21bc9-5678-5d2d-8ee6-e6164d024b42", 00:24:21.713 "is_configured": true, 00:24:21.713 "data_offset": 2048, 00:24:21.713 "data_size": 63488 00:24:21.713 }, 00:24:21.713 { 00:24:21.713 "name": "BaseBdev2", 00:24:21.713 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935", 00:24:21.713 "is_configured": true, 00:24:21.713 "data_offset": 2048, 00:24:21.713 "data_size": 63488 00:24:21.713 }, 00:24:21.713 { 00:24:21.713 "name": "BaseBdev3", 00:24:21.713 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b", 00:24:21.713 "is_configured": true, 00:24:21.713 "data_offset": 2048, 00:24:21.713 "data_size": 63488 00:24:21.713 } 00:24:21.713 ] 00:24:21.713 }' 00:24:21.713 16:39:58 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:21.971 16:39:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:21.971 16:39:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:21.971 16:39:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:21.971 16:39:58 -- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:22.907 16:39:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:22.907 16:39:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:22.907 16:39:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:22.907 16:39:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:22.907 16:39:59 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:22.907 16:39:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:22.907 16:39:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:22.907 16:39:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:23.166 16:39:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:23.166 "name": "raid_bdev1",
00:24:23.166 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1",
00:24:23.166 "strip_size_kb": 64,
00:24:23.166 "state": "online",
00:24:23.166 "raid_level": "raid5f",
00:24:23.166 "superblock": true,
00:24:23.166 "num_base_bdevs": 3,
00:24:23.166 "num_base_bdevs_discovered": 3,
00:24:23.166 "num_base_bdevs_operational": 3,
00:24:23.166 "process": {
00:24:23.166 "type": "rebuild",
00:24:23.166 "target": "spare",
00:24:23.166 "progress": {
00:24:23.166 "blocks": 57344,
00:24:23.166 "percent": 45
00:24:23.166 }
00:24:23.166 },
00:24:23.166 "base_bdevs_list": [
00:24:23.166 {
00:24:23.166 "name": "spare",
00:24:23.166 "uuid": "45e21bc9-5678-5d2d-8ee6-e6164d024b42",
00:24:23.166 "is_configured": true,
00:24:23.166 "data_offset": 2048,
00:24:23.166 "data_size": 63488
00:24:23.166 },
00:24:23.166 {
00:24:23.166 "name": "BaseBdev2",
00:24:23.166 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935",
00:24:23.166 "is_configured": true,
00:24:23.166 "data_offset": 2048,
00:24:23.166 "data_size": 63488
00:24:23.166 },
00:24:23.166 {
00:24:23.166 "name": "BaseBdev3",
00:24:23.166 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b",
00:24:23.166 "is_configured": true,
00:24:23.166 "data_offset": 2048,
00:24:23.166 "data_size": 63488
00:24:23.166 }
00:24:23.166 ]
00:24:23.166 }'
00:24:23.166 16:39:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:23.166 16:39:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:23.166 16:39:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:23.166 16:39:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:23.166 16:39:59 -- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:24.104 16:40:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:24.104 16:40:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:24.104 16:40:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:24.104 16:40:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:24.104 16:40:00 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:24.104 16:40:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:24.363 16:40:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:24.363 16:40:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:24.363 16:40:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:24.363 "name": "raid_bdev1",
00:24:24.363 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1",
00:24:24.363 "strip_size_kb": 64,
00:24:24.363 "state": "online",
00:24:24.363 "raid_level": "raid5f",
00:24:24.363 "superblock": true,
00:24:24.363 "num_base_bdevs": 3,
00:24:24.363 "num_base_bdevs_discovered": 3,
00:24:24.363 "num_base_bdevs_operational": 3,
00:24:24.363 "process": {
00:24:24.363 "type": "rebuild",
00:24:24.363 "target": "spare",
00:24:24.363 "progress": {
00:24:24.363 "blocks": 83968,
00:24:24.363 "percent": 66
00:24:24.363 }
00:24:24.363 },
00:24:24.363 "base_bdevs_list": [
00:24:24.363 {
00:24:24.363 "name": "spare",
00:24:24.363 "uuid": "45e21bc9-5678-5d2d-8ee6-e6164d024b42",
00:24:24.363 "is_configured": true,
00:24:24.363 "data_offset": 2048,
00:24:24.363 "data_size": 63488
00:24:24.363 },
00:24:24.363 {
00:24:24.363 "name": "BaseBdev2",
00:24:24.363 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935",
00:24:24.363 "is_configured": true,
00:24:24.363 "data_offset": 2048,
00:24:24.363 "data_size": 63488
00:24:24.363 },
00:24:24.363 {
00:24:24.363 "name": "BaseBdev3",
00:24:24.363 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b",
00:24:24.363 "is_configured": true,
00:24:24.363 "data_offset": 2048,
00:24:24.363 "data_size": 63488
00:24:24.363 }
00:24:24.363 ]
00:24:24.363 }'
00:24:24.363 16:40:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:24.363 16:40:01 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:24.363 16:40:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:24.621 16:40:01 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:24.621 16:40:01 -- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:25.557 16:40:02 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:25.557 16:40:02 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:25.557 16:40:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:25.557 16:40:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:25.557 16:40:02 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:25.557 16:40:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:25.557 16:40:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:25.557 16:40:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:25.816 16:40:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:25.816 "name": "raid_bdev1",
00:24:25.816 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1",
00:24:25.816 "strip_size_kb": 64,
00:24:25.816 "state": "online",
00:24:25.816 "raid_level": "raid5f",
00:24:25.816 "superblock": true,
00:24:25.816 "num_base_bdevs": 3,
00:24:25.816 "num_base_bdevs_discovered": 3,
00:24:25.816 "num_base_bdevs_operational": 3,
00:24:25.816 "process": {
00:24:25.816 "type": "rebuild",
00:24:25.816 "target": "spare",
00:24:25.816 "progress": {
00:24:25.816 "blocks": 110592,
00:24:25.816 "percent": 87
00:24:25.816 }
00:24:25.816 },
00:24:25.816 "base_bdevs_list": [
00:24:25.816 {
00:24:25.816 "name": "spare",
00:24:25.816 "uuid": "45e21bc9-5678-5d2d-8ee6-e6164d024b42",
00:24:25.816 "is_configured": true,
00:24:25.816 "data_offset": 2048,
00:24:25.816 "data_size": 63488
00:24:25.816 },
00:24:25.816 {
00:24:25.816 "name": "BaseBdev2",
00:24:25.816 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935",
00:24:25.816 "is_configured": true,
00:24:25.816 "data_offset": 2048,
00:24:25.816 "data_size": 63488
00:24:25.816 },
00:24:25.816 {
00:24:25.816 "name": "BaseBdev3",
00:24:25.816 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b",
00:24:25.816 "is_configured": true,
00:24:25.816 "data_offset": 2048,
00:24:25.816 "data_size": 63488
00:24:25.816 }
00:24:25.816 ]
00:24:25.816 }'
00:24:25.816 16:40:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:25.816 16:40:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:24:25.816 16:40:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:25.816 16:40:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:24:25.816 16:40:02 -- bdev/bdev_raid.sh@662 -- # sleep 1
00:24:26.751 [2024-07-11 16:40:03.194114] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:24:26.751 [2024-07-11 16:40:03.194180] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:24:26.751 [2024-07-11 16:40:03.194365] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:27.008 16:40:03 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:24:27.008 16:40:03 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:24:27.008 16:40:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:27.008 16:40:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:24:27.008 16:40:03 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:24:27.008 16:40:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:27.008 16:40:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:27.008 16:40:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:27.008 16:40:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:27.008 "name": "raid_bdev1",
00:24:27.008 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1",
00:24:27.008 "strip_size_kb": 64,
00:24:27.008 "state": "online",
00:24:27.008 "raid_level": "raid5f",
00:24:27.008 "superblock": true,
00:24:27.008 "num_base_bdevs": 3,
00:24:27.008 "num_base_bdevs_discovered": 3,
00:24:27.008 "num_base_bdevs_operational": 3,
00:24:27.008 "base_bdevs_list": [
00:24:27.008 {
00:24:27.008 "name": "spare",
00:24:27.008 "uuid": "45e21bc9-5678-5d2d-8ee6-e6164d024b42",
00:24:27.008 "is_configured": true,
00:24:27.008 "data_offset": 2048,
00:24:27.008 "data_size": 63488
00:24:27.008 },
00:24:27.008 {
00:24:27.008 "name": "BaseBdev2",
00:24:27.008 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935",
00:24:27.008 "is_configured": true,
00:24:27.008 "data_offset": 2048,
00:24:27.008 "data_size": 63488
00:24:27.008 },
00:24:27.008 {
00:24:27.008 "name": "BaseBdev3",
00:24:27.008 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b",
00:24:27.008 "is_configured": true,
00:24:27.008 "data_offset": 2048,
00:24:27.008 "data_size": 63488
00:24:27.008 }
00:24:27.008 ]
00:24:27.008 }'
00:24:27.268 16:40:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:27.268 16:40:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:24:27.268 16:40:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:27.268 16:40:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:24:27.268 16:40:03 -- bdev/bdev_raid.sh@660 -- # break
00:24:27.268 16:40:03 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:27.268 16:40:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:27.268 16:40:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:24:27.268 16:40:03 -- bdev/bdev_raid.sh@185 -- # local target=none
00:24:27.268 16:40:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:27.268 16:40:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:27.268 16:40:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:27.556 "name": "raid_bdev1",
00:24:27.556 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1",
00:24:27.556 "strip_size_kb": 64,
00:24:27.556 "state": "online",
00:24:27.556 "raid_level": "raid5f",
00:24:27.556 "superblock": true,
00:24:27.556 "num_base_bdevs": 3,
00:24:27.556 "num_base_bdevs_discovered": 3,
00:24:27.556 "num_base_bdevs_operational": 3,
00:24:27.556 "base_bdevs_list": [
00:24:27.556 {
00:24:27.556 "name": "spare",
00:24:27.556 "uuid": "45e21bc9-5678-5d2d-8ee6-e6164d024b42",
00:24:27.556 "is_configured": true,
00:24:27.556 "data_offset": 2048,
00:24:27.556 "data_size": 63488
00:24:27.556 },
00:24:27.556 {
00:24:27.556 "name": "BaseBdev2",
00:24:27.556 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935",
00:24:27.556 "is_configured": true,
00:24:27.556 "data_offset": 2048,
00:24:27.556 "data_size": 63488
00:24:27.556 },
00:24:27.556 {
00:24:27.556 "name": "BaseBdev3",
00:24:27.556 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b",
00:24:27.556 "is_configured": true,
00:24:27.556 "data_offset": 2048,
00:24:27.556 "data_size": 63488
00:24:27.556 }
00:24:27.556 ]
00:24:27.556 }'
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:27.556 16:40:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:27.820 16:40:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:27.820 "name": "raid_bdev1",
00:24:27.820 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1",
00:24:27.820 "strip_size_kb": 64,
00:24:27.820 "state": "online",
00:24:27.820 "raid_level": "raid5f",
00:24:27.820 "superblock": true,
00:24:27.820 "num_base_bdevs": 3,
00:24:27.820 "num_base_bdevs_discovered": 3,
00:24:27.820 "num_base_bdevs_operational": 3,
00:24:27.820 "base_bdevs_list": [
00:24:27.820 {
00:24:27.820 "name": "spare",
00:24:27.820 "uuid": "45e21bc9-5678-5d2d-8ee6-e6164d024b42",
00:24:27.820 "is_configured": true,
00:24:27.820 "data_offset": 2048,
00:24:27.820 "data_size": 63488
00:24:27.820 },
00:24:27.820 {
00:24:27.820 "name": "BaseBdev2",
00:24:27.820 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935",
00:24:27.820 "is_configured": true,
00:24:27.820 "data_offset": 2048,
00:24:27.820 "data_size": 63488
00:24:27.820 },
00:24:27.820 {
00:24:27.820 "name": "BaseBdev3",
00:24:27.820 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b",
00:24:27.820 "is_configured": true,
00:24:27.820 "data_offset": 2048,
00:24:27.820 "data_size": 63488
00:24:27.820 }
00:24:27.820 ]
00:24:27.820 }'
00:24:27.820 16:40:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:27.820 16:40:04 -- common/autotest_common.sh@10 -- # set +x
00:24:28.386 16:40:05 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:24:28.645 [2024-07-11 16:40:05.328907] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:28.645 [2024-07-11 16:40:05.328966] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:24:28.645 [2024-07-11 16:40:05.329054] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:28.645 [2024-07-11 16:40:05.329140] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:28.645 [2024-07-11 16:40:05.329168] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline
00:24:28.645 16:40:05 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:28.645 16:40:05 -- bdev/bdev_raid.sh@671 -- # jq length
00:24:28.902 16:40:05 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:24:28.902 16:40:05 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:24:28.902 16:40:05 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:24:28.902 16:40:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:28.902 16:40:05 -- bdev/nbd_common.sh@10 -- # bdev_list=($2)
00:24:28.902 16:40:05 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:24:28.902 16:40:05 -- bdev/nbd_common.sh@11 -- # nbd_list=($3)
00:24:28.902 16:40:05 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:24:28.902 16:40:05 -- bdev/nbd_common.sh@12 -- # local i
00:24:28.902 16:40:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:24:28.902 16:40:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:28.902 16:40:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:24:29.160 /dev/nbd0
00:24:29.160 16:40:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:24:29.160 16:40:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:24:29.160 16:40:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:24:29.160 16:40:05 -- common/autotest_common.sh@857 -- # local i
00:24:29.160 16:40:05 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:24:29.160 16:40:05 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:24:29.160 16:40:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:24:29.160 16:40:05 -- common/autotest_common.sh@861 -- # break
00:24:29.160 16:40:05 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:24:29.160 16:40:05 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:24:29.160 16:40:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:29.160 1+0 records in
00:24:29.160 1+0 records out
00:24:29.160 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393008 s, 10.4 MB/s
00:24:29.160 16:40:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:29.160 16:40:05 -- common/autotest_common.sh@874 -- # size=4096
00:24:29.160 16:40:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:29.160 16:40:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:24:29.160 16:40:05 -- common/autotest_common.sh@877 -- # return 0
00:24:29.160 16:40:05 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:29.160 16:40:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:29.160 16:40:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:24:29.417 /dev/nbd1
00:24:29.417 16:40:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:24:29.417 16:40:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:24:29.417 16:40:06 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:24:29.417 16:40:06 -- common/autotest_common.sh@857 -- # local i
00:24:29.417 16:40:06 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:24:29.417 16:40:06 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:24:29.417 16:40:06 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:24:29.417 16:40:06 -- common/autotest_common.sh@861 -- # break
00:24:29.417 16:40:06 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:24:29.417 16:40:06 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:24:29.417 16:40:06 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:24:29.417 1+0 records in
00:24:29.417 1+0 records out
00:24:29.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382556 s, 10.7 MB/s
00:24:29.417 16:40:06 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:29.417 16:40:06 -- common/autotest_common.sh@874 -- # size=4096
00:24:29.417 16:40:06 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:24:29.417 16:40:06 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:24:29.417 16:40:06 -- common/autotest_common.sh@877 -- # return 0
00:24:29.417 16:40:06 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:24:29.417 16:40:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:24:29.417 16:40:06 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:24:29.675 16:40:06 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:24:29.675 16:40:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:24:29.675 16:40:06 -- bdev/nbd_common.sh@50 -- # nbd_list=($2)
00:24:29.675 16:40:06 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:24:29.675 16:40:06 -- bdev/nbd_common.sh@51 -- # local i
00:24:29.675 16:40:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:29.675 16:40:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@41 -- # break
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@45 -- # return 0
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:24:29.932 16:40:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:24:30.190 16:40:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:24:30.190 16:40:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:24:30.190 16:40:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:24:30.190 16:40:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:24:30.190 16:40:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:30.190 16:40:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:24:30.190 16:40:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1
00:24:30.447 16:40:07 -- bdev/nbd_common.sh@37 -- # (( i++ ))
00:24:30.447 16:40:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:24:30.447 16:40:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:24:30.447 16:40:07 -- bdev/nbd_common.sh@41 -- # break
00:24:30.447 16:40:07 -- bdev/nbd_common.sh@45 -- # return 0
00:24:30.447 16:40:07 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:24:30.447 16:40:07 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:24:30.447 16:40:07 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:24:30.447 16:40:07 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:24:30.705 16:40:07 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:24:30.705 [2024-07-11 16:40:07.510492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:24:30.705 [2024-07-11 16:40:07.510600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:30.705 [2024-07-11 16:40:07.510635] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:24:30.705 [2024-07-11 16:40:07.510661] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:30.705 [2024-07-11 16:40:07.512814] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:30.705 [2024-07-11 16:40:07.512898] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:24:30.705 [2024-07-11 16:40:07.513050] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:24:30.705 [2024-07-11 16:40:07.513135] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:30.963 BaseBdev1
00:24:30.963 16:40:07 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:24:30.963 16:40:07 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']'
00:24:30.963 16:40:07 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2
00:24:31.221 16:40:07 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:24:31.221 [2024-07-11 16:40:07.938544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:24:31.221 [2024-07-11 16:40:07.938639] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:31.221 [2024-07-11 16:40:07.938691] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:24:31.221 [2024-07-11 16:40:07.938709] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:31.221 [2024-07-11 16:40:07.939151] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:31.221 [2024-07-11 16:40:07.939210] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:24:31.221 [2024-07-11 16:40:07.939331] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2
00:24:31.221 [2024-07-11 16:40:07.939347] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1)
00:24:31.221 [2024-07-11 16:40:07.939354] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:24:31.221 [2024-07-11 16:40:07.939372] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state configuring
00:24:31.221 [2024-07-11 16:40:07.939433] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:24:31.221 BaseBdev2
00:24:31.221 16:40:07 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:24:31.221 16:40:07 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']'
00:24:31.221 16:40:07 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3
00:24:31.480 16:40:08 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:24:31.739 [2024-07-11 16:40:08.302615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:24:31.739 [2024-07-11 16:40:08.302690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:31.739 [2024-07-11 16:40:08.302724] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80
00:24:31.739 [2024-07-11 16:40:08.302741] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:31.739 [2024-07-11 16:40:08.303137] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:31.739 [2024-07-11 16:40:08.303195] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:24:31.739 [2024-07-11 16:40:08.303278] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3
00:24:31.739 [2024-07-11 16:40:08.303303] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:24:31.739 BaseBdev3
00:24:31.739 16:40:08 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:24:31.739 16:40:08 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:24:31.998 [2024-07-11 16:40:08.668642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:24:31.998 [2024-07-11 16:40:08.668749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:24:31.998 [2024-07-11 16:40:08.668784] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80
00:24:31.998 [2024-07-11 16:40:08.668809] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:24:31.998 [2024-07-11 16:40:08.669413] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:24:31.998 [2024-07-11 16:40:08.669504] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:24:31.998 [2024-07-11 16:40:08.669599] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare
00:24:31.998 [2024-07-11 16:40:08.669633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:24:31.998 spare
00:24:31.998 16:40:08 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:24:31.998 16:40:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:24:31.998 16:40:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:31.998 16:40:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:31.998 16:40:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:31.998 16:40:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:24:31.998 16:40:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:31.998 16:40:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:31.998 16:40:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:31.998 16:40:08 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:31.998 16:40:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:31.998 16:40:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:31.998 [2024-07-11 16:40:08.769759] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b780
00:24:31.998 [2024-07-11 16:40:08.769783] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:24:31.998 [2024-07-11 16:40:08.769910] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004bb40
00:24:31.998 [2024-07-11 16:40:08.774277] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b780
00:24:31.998 [2024-07-11 16:40:08.774304] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b780
00:24:31.998 [2024-07-11 16:40:08.774522] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:32.258 16:40:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:32.258 "name": "raid_bdev1",
00:24:32.258 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1",
00:24:32.258 "strip_size_kb": 64,
00:24:32.258 "state": "online",
00:24:32.258 "raid_level": "raid5f",
00:24:32.258 "superblock": true,
00:24:32.258 "num_base_bdevs": 3,
00:24:32.258 "num_base_bdevs_discovered": 3,
00:24:32.258 "num_base_bdevs_operational": 3,
00:24:32.258 "base_bdevs_list": [
00:24:32.258 {
00:24:32.258 "name": "spare",
00:24:32.258 "uuid": "45e21bc9-5678-5d2d-8ee6-e6164d024b42",
00:24:32.258 "is_configured": true,
00:24:32.258 "data_offset": 2048,
00:24:32.258 "data_size": 63488
00:24:32.258 },
00:24:32.258 {
00:24:32.258 "name": "BaseBdev2",
00:24:32.258 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935",
00:24:32.258 "is_configured": true,
00:24:32.258 "data_offset": 2048,
00:24:32.258 "data_size": 63488
00:24:32.258 },
00:24:32.258 {
00:24:32.258 "name": "BaseBdev3",
00:24:32.258 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b",
00:24:32.258 "is_configured": true,
00:24:32.258 "data_offset": 2048,
00:24:32.258 "data_size": 63488
00:24:32.258 }
00:24:32.258 ]
00:24:32.258 }'
00:24:32.258 16:40:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:32.258 16:40:08 -- common/autotest_common.sh@10 -- # set +x
00:24:32.827 16:40:09 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none
00:24:32.827 16:40:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:24:32.827 16:40:09 -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:24:32.827 16:40:09 -- bdev/bdev_raid.sh@185 -- # local target=none
00:24:32.827 16:40:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:24:32.827 16:40:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:32.827 16:40:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:24:33.086 16:40:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:24:33.086 "name": "raid_bdev1",
00:24:33.086 "uuid": "d78fbb83-eae2-4279-bfbb-76b33fa478a1",
00:24:33.086 "strip_size_kb": 64,
00:24:33.086 "state": "online",
00:24:33.086 "raid_level": "raid5f",
00:24:33.086 "superblock": true,
00:24:33.086 "num_base_bdevs": 3,
00:24:33.086 "num_base_bdevs_discovered": 3,
00:24:33.086 "num_base_bdevs_operational": 3,
00:24:33.086 "base_bdevs_list": [
00:24:33.086 {
00:24:33.086 "name": "spare",
00:24:33.086 "uuid": "45e21bc9-5678-5d2d-8ee6-e6164d024b42",
00:24:33.086 "is_configured": true,
00:24:33.086 "data_offset": 2048,
00:24:33.086 "data_size": 63488
00:24:33.086 },
00:24:33.086 {
00:24:33.086 "name": "BaseBdev2",
00:24:33.086 "uuid": "41c1ae9e-7a79-5bdb-8200-dc969fe7f935",
00:24:33.086 "is_configured": true,
00:24:33.086 "data_offset": 2048,
00:24:33.086 "data_size": 63488
00:24:33.086 },
00:24:33.086 {
00:24:33.086 "name": "BaseBdev3",
00:24:33.086 "uuid": "4a7fe47c-129f-5e54-901c-e4309fb3653b",
00:24:33.086 "is_configured": true,
00:24:33.086 "data_offset": 2048,
00:24:33.086 "data_size": 63488
00:24:33.086 }
00:24:33.086 ]
00:24:33.086 }'
00:24:33.086 16:40:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:24:33.086 16:40:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:24:33.086 16:40:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:24:33.086 16:40:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:24:33.086 16:40:09 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:33.086 16:40:09 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name'
00:24:33.345 16:40:10 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]]
00:24:33.345 16:40:10 -- bdev/bdev_raid.sh@709 -- # killprocess 132319
00:24:33.345 16:40:10 -- common/autotest_common.sh@926 -- # '[' -z 132319 ']'
00:24:33.345 16:40:10 -- common/autotest_common.sh@930 -- # kill -0 132319
00:24:33.345 16:40:10 -- common/autotest_common.sh@931 -- # uname
00:24:33.345 16:40:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:33.345 16:40:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132319
00:24:33.345 killing process with pid 132319
00:24:33.345 Received shutdown signal, test time was about 60.000000 seconds
00:24:33.345
00:24:33.345 Latency(us)
00:24:33.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:33.345 ===================================================================================================================
00:24:33.345 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:24:33.345 16:40:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:24:33.345 16:40:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:24:33.345 16:40:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132319'
00:24:33.345 16:40:10 -- common/autotest_common.sh@945 -- # kill 132319
00:24:33.345 16:40:10 -- common/autotest_common.sh@950 -- # wait 132319
00:24:33.346 [2024-07-11 16:40:10.085032] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:24:33.346 [2024-07-11 16:40:10.085142] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:24:33.346 [2024-07-11 16:40:10.085242] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:24:33.346 [2024-07-11 16:40:10.085334] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state offline
00:24:33.604 [2024-07-11 16:40:10.338081] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:24:34.541 ************************************
00:24:34.541 END TEST raid5f_rebuild_test_sb
00:24:34.541 ************************************
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@711 -- # return 0
00:24:34.541
00:24:34.541 real 0m23.954s
00:24:34.541 user 0m37.840s
00:24:34.541 sys 0m2.392s
00:24:34.541 16:40:11 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:34.541 16:40:11 -- common/autotest_common.sh@10 -- # set +x
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@743 -- # for n in {3..4}
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false
00:24:34.541 16:40:11 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:24:34.541 16:40:11 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:24:34.541 16:40:11 -- common/autotest_common.sh@10 -- # set +x
00:24:34.541 ************************************
00:24:34.541 START TEST raid5f_state_function_test
00:24:34.541 ************************************
00:24:34.541 16:40:11 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 false
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@204 -- # local superblock=false
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']'
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@226 -- # raid_pid=132988
00:24:34.541 Process raid pid: 132988
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 132988'
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:24:34.541 16:40:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 132988 /var/tmp/spdk-raid.sock
00:24:34.541 16:40:11 -- common/autotest_common.sh@819 -- # '[' -z 132988 ']'
00:24:34.541 16:40:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:24:34.541 16:40:11 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:34.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:24:34.541 16:40:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:24:34.541 16:40:11 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:34.541 16:40:11 -- common/autotest_common.sh@10 -- # set +x
00:24:34.801 [2024-07-11 16:40:11.371207] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:24:34.801 [2024-07-11 16:40:11.371396] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:34.801 [2024-07-11 16:40:11.537949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:35.060 [2024-07-11 16:40:11.697367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:35.060 [2024-07-11 16:40:11.861871] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:24:35.628 16:40:12 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:35.628 16:40:12 -- common/autotest_common.sh@852 -- # return 0
00:24:35.628 16:40:12 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:24:35.887 [2024-07-11 16:40:12.524148] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:24:35.887 [2024-07-11 16:40:12.524229] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:24:35.887 [2024-07-11 16:40:12.524241] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:24:35.887 [2024-07-11 16:40:12.524261] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:24:35.887 [2024-07-11 16:40:12.524267] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:24:35.887 [2024-07-11 16:40:12.524298] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:24:35.887 [2024-07-11 16:40:12.524306] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:24:35.887 [2024-07-11 16:40:12.524324] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:24:35.887 16:40:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:24:35.887 16:40:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:24:35.887 16:40:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:35.887 16:40:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:35.887 16:40:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:35.887 16:40:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:35.887 16:40:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:35.887 16:40:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:35.887 16:40:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:35.887 16:40:12 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:35.887 16:40:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:35.887 16:40:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:36.146 16:40:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:36.146 "name": "Existed_Raid",
00:24:36.146 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:36.146 "strip_size_kb": 64,
00:24:36.146 "state": "configuring",
00:24:36.146 "raid_level": "raid5f",
00:24:36.146 "superblock": false,
00:24:36.146 "num_base_bdevs": 4,
00:24:36.146 "num_base_bdevs_discovered": 0,
00:24:36.146 "num_base_bdevs_operational": 4,
00:24:36.146 "base_bdevs_list": [
00:24:36.146 {
00:24:36.146 "name": "BaseBdev1",
00:24:36.146 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:36.146 "is_configured": false,
00:24:36.146 "data_offset": 0,
00:24:36.146 "data_size": 0
00:24:36.146 },
00:24:36.146 {
00:24:36.146 "name": "BaseBdev2",
00:24:36.146 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:36.146 "is_configured": false,
00:24:36.146 "data_offset": 0,
00:24:36.146 "data_size": 0
00:24:36.146 },
00:24:36.146 {
00:24:36.146 "name": "BaseBdev3",
00:24:36.146 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:36.146 "is_configured": false,
00:24:36.146 "data_offset": 0,
00:24:36.146 "data_size": 0
00:24:36.146 },
00:24:36.146 {
00:24:36.146 "name": "BaseBdev4",
00:24:36.146 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:36.146 "is_configured": false,
00:24:36.146 "data_offset": 0,
00:24:36.146 "data_size": 0
00:24:36.146 }
00:24:36.146 ]
00:24:36.146 }'
00:24:36.146 16:40:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:36.146 16:40:12 -- common/autotest_common.sh@10 -- # set +x
00:24:36.732 16:40:13 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:24:36.990 [2024-07-11 16:40:13.608233] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:24:36.990 [2024-07-11 16:40:13.608288] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring
00:24:36.990 16:40:13 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:24:37.249 [2024-07-11 16:40:13.868303] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:24:37.249 [2024-07-11 16:40:13.868354] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:24:37.249 [2024-07-11 16:40:13.868381] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:24:37.249 [2024-07-11 16:40:13.868409] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:24:37.249 [2024-07-11 16:40:13.868417] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:24:37.249 [2024-07-11 16:40:13.868448] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:24:37.249 [2024-07-11 16:40:13.868455] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:24:37.249 [2024-07-11 16:40:13.868474] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:24:37.249 16:40:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:24:37.507 [2024-07-11 16:40:14.077605] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:37.507 BaseBdev1
00:24:37.507 16:40:14 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:24:37.507 16:40:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:24:37.507 16:40:14 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:24:37.507 16:40:14 -- common/autotest_common.sh@889 -- # local i
00:24:37.507 16:40:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:24:37.507 16:40:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:24:37.507 16:40:14 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:24:37.766 16:40:14 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:24:37.766 [
00:24:37.766 {
00:24:37.766 "name": "BaseBdev1",
00:24:37.766 "aliases": [
00:24:37.766 "59934795-9210-426c-844b-5c21c8e3a4df"
00:24:37.766 ],
00:24:37.766 "product_name": "Malloc disk",
00:24:37.766 "block_size": 512,
00:24:37.766 "num_blocks": 65536,
00:24:37.766 "uuid": "59934795-9210-426c-844b-5c21c8e3a4df",
00:24:37.766 "assigned_rate_limits": {
00:24:37.766 "rw_ios_per_sec": 0,
00:24:37.766 "rw_mbytes_per_sec": 0,
00:24:37.766 "r_mbytes_per_sec": 0,
00:24:37.766 "w_mbytes_per_sec": 0
00:24:37.766 },
00:24:37.766 "claimed": true,
00:24:37.766 "claim_type": "exclusive_write",
00:24:37.766 "zoned": false,
00:24:37.766 "supported_io_types": {
00:24:37.766 "read": true,
00:24:37.766 "write": true,
00:24:37.766 "unmap": true,
00:24:37.766 "write_zeroes": true,
00:24:37.766 "flush": true,
00:24:37.766 "reset": true,
00:24:37.766 "compare": false,
00:24:37.766 "compare_and_write": false,
00:24:37.766 "abort": true,
00:24:37.766 "nvme_admin": false,
00:24:37.766 "nvme_io": false
00:24:37.766 },
00:24:37.766 "memory_domains": [
00:24:37.766 {
00:24:37.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:37.766 "dma_device_type": 2
00:24:37.766 }
00:24:37.766 ],
00:24:37.766 "driver_specific": {}
00:24:37.766 }
00:24:37.766 ]
00:24:37.766 16:40:14 -- common/autotest_common.sh@895 -- # return 0
00:24:37.766 16:40:14 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:24:37.766 16:40:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:24:37.766 16:40:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:37.766 16:40:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:37.766 16:40:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:37.766 16:40:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:37.766 16:40:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:37.766 16:40:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:37.766 16:40:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:37.766 16:40:14 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:37.766 16:40:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:37.766 16:40:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:38.026 16:40:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:38.026 "name": "Existed_Raid",
00:24:38.026 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:38.026 "strip_size_kb": 64,
00:24:38.026 "state": "configuring",
00:24:38.026 "raid_level": "raid5f",
00:24:38.026 "superblock": false,
00:24:38.026 "num_base_bdevs": 4,
00:24:38.026 "num_base_bdevs_discovered": 1,
00:24:38.026 "num_base_bdevs_operational": 4,
00:24:38.026 "base_bdevs_list": [
00:24:38.026 {
00:24:38.026 "name": "BaseBdev1",
00:24:38.026 "uuid": "59934795-9210-426c-844b-5c21c8e3a4df",
00:24:38.026 "is_configured": true,
00:24:38.026 "data_offset": 0,
00:24:38.026 "data_size": 65536
00:24:38.026 },
00:24:38.026 {
00:24:38.026 "name": "BaseBdev2",
00:24:38.026 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:38.026 "is_configured": false,
00:24:38.026 "data_offset": 0,
00:24:38.026 "data_size": 0
00:24:38.026 },
00:24:38.026 {
00:24:38.026 "name": "BaseBdev3",
00:24:38.026 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:38.026 "is_configured": false,
00:24:38.026 "data_offset": 0,
00:24:38.026 "data_size": 0
00:24:38.026 },
00:24:38.026 {
00:24:38.026 "name": "BaseBdev4",
00:24:38.026 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:38.026 "is_configured": false,
00:24:38.026 "data_offset": 0,
00:24:38.026 "data_size": 0
00:24:38.026 }
00:24:38.026 ]
00:24:38.026 }'
00:24:38.026 16:40:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:38.026 16:40:14 -- common/autotest_common.sh@10 -- # set +x
00:24:38.594 16:40:15 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:24:38.853 [2024-07-11 16:40:15.581877] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:24:38.853 [2024-07-11 16:40:15.581924] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring
00:24:38.853 16:40:15 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:24:38.853 16:40:15 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:24:39.113 [2024-07-11 16:40:15.825943] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:24:39.113 [2024-07-11 16:40:15.827517] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:24:39.113 [2024-07-11 16:40:15.827589] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:24:39.113 [2024-07-11 16:40:15.827618] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:24:39.113 [2024-07-11 16:40:15.827638] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:24:39.113 [2024-07-11 16:40:15.827646] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:24:39.113 [2024-07-11 16:40:15.827660] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:39.113 16:40:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:39.372 16:40:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:39.372 "name": "Existed_Raid",
00:24:39.372 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:39.372 "strip_size_kb": 64,
00:24:39.372 "state": "configuring",
00:24:39.372 "raid_level": "raid5f",
00:24:39.372 "superblock": false,
00:24:39.372 "num_base_bdevs": 4,
00:24:39.372 "num_base_bdevs_discovered": 1,
00:24:39.372 "num_base_bdevs_operational": 4,
00:24:39.372 "base_bdevs_list": [
00:24:39.372 {
00:24:39.372 "name": "BaseBdev1",
00:24:39.372 "uuid": "59934795-9210-426c-844b-5c21c8e3a4df",
00:24:39.372 "is_configured": true,
00:24:39.372 "data_offset": 0,
00:24:39.372 "data_size": 65536
00:24:39.372 },
00:24:39.372 {
00:24:39.372 "name": "BaseBdev2",
00:24:39.372 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:39.372 "is_configured": false,
00:24:39.372 "data_offset": 0,
00:24:39.372 "data_size": 0
00:24:39.372 },
00:24:39.372 {
00:24:39.372 "name": "BaseBdev3",
00:24:39.372 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:39.372 "is_configured": false,
00:24:39.372 "data_offset": 0,
00:24:39.372 "data_size": 0
00:24:39.372 },
00:24:39.372 {
00:24:39.372 "name": "BaseBdev4",
00:24:39.372 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:39.372 "is_configured": false,
00:24:39.372 "data_offset": 0,
00:24:39.372 "data_size": 0
00:24:39.372 }
00:24:39.372 ]
00:24:39.372 }'
00:24:39.372 16:40:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:39.372 16:40:16 -- common/autotest_common.sh@10 -- # set +x
00:24:39.941 16:40:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:24:40.199 [2024-07-11 16:40:16.967272] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:24:40.199 BaseBdev2
00:24:40.199 16:40:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:24:40.199 16:40:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2
00:24:40.199 16:40:16 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:24:40.199 16:40:16 -- common/autotest_common.sh@889 -- # local i
00:24:40.199 16:40:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:24:40.199 16:40:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:24:40.199 16:40:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:24:40.457 16:40:17 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:24:40.714 [
00:24:40.714 {
00:24:40.714 "name": "BaseBdev2",
00:24:40.714 "aliases": [
00:24:40.714 "f799b0b6-5799-494a-96e1-e645858bedf3"
00:24:40.715 ],
00:24:40.715 "product_name": "Malloc disk",
00:24:40.715 "block_size": 512,
00:24:40.715 "num_blocks": 65536,
00:24:40.715 "uuid": "f799b0b6-5799-494a-96e1-e645858bedf3",
00:24:40.715 "assigned_rate_limits": {
00:24:40.715 "rw_ios_per_sec": 0,
00:24:40.715 "rw_mbytes_per_sec": 0,
00:24:40.715 "r_mbytes_per_sec": 0,
00:24:40.715 "w_mbytes_per_sec": 0
00:24:40.715 },
00:24:40.715 "claimed": true,
00:24:40.715 "claim_type": "exclusive_write",
00:24:40.715 "zoned": false,
00:24:40.715 "supported_io_types": {
00:24:40.715 "read": true,
00:24:40.715 "write": true,
00:24:40.715 "unmap": true,
00:24:40.715 "write_zeroes": true,
00:24:40.715 "flush": true,
00:24:40.715 "reset": true,
00:24:40.715 "compare": false,
00:24:40.715 "compare_and_write": false,
00:24:40.715 "abort": true,
00:24:40.715 "nvme_admin": false,
00:24:40.715 "nvme_io": false
00:24:40.715 },
00:24:40.715 "memory_domains": [
00:24:40.715 {
00:24:40.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:40.715 "dma_device_type": 2
00:24:40.715 }
00:24:40.715 ],
00:24:40.715 "driver_specific": {}
00:24:40.715 }
00:24:40.715 ]
00:24:40.715 16:40:17 -- common/autotest_common.sh@895 -- # return 0
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:40.715 16:40:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:41.010 16:40:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:41.010 "name": "Existed_Raid",
00:24:41.010 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:41.010 "strip_size_kb": 64,
00:24:41.010 "state": "configuring",
00:24:41.010 "raid_level": "raid5f",
00:24:41.010 "superblock": false,
00:24:41.010 "num_base_bdevs": 4,
00:24:41.010 "num_base_bdevs_discovered": 2,
00:24:41.010 "num_base_bdevs_operational": 4,
00:24:41.010 "base_bdevs_list": [
00:24:41.010 {
00:24:41.010 "name": "BaseBdev1",
00:24:41.010 "uuid": "59934795-9210-426c-844b-5c21c8e3a4df",
00:24:41.010 "is_configured": true,
00:24:41.010 "data_offset": 0,
00:24:41.010 "data_size": 65536
00:24:41.010 },
00:24:41.010 {
00:24:41.010 "name": "BaseBdev2",
00:24:41.010 "uuid": "f799b0b6-5799-494a-96e1-e645858bedf3",
00:24:41.010 "is_configured": true,
00:24:41.010 "data_offset": 0,
00:24:41.010 "data_size": 65536
00:24:41.010 },
00:24:41.010 {
00:24:41.010 "name": "BaseBdev3",
00:24:41.010 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:41.010 "is_configured": false,
00:24:41.010 "data_offset": 0,
00:24:41.010 "data_size": 0
00:24:41.010 },
00:24:41.010 {
00:24:41.010 "name": "BaseBdev4",
00:24:41.010 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:41.010 "is_configured": false,
00:24:41.010 "data_offset": 0,
00:24:41.010 "data_size": 0
00:24:41.010 }
00:24:41.010 ]
00:24:41.010 }'
00:24:41.010 16:40:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:41.010 16:40:17 -- common/autotest_common.sh@10 -- # set +x
00:24:41.598 16:40:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:24:41.857 [2024-07-11 16:40:18.478116] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:24:41.857 BaseBdev3
00:24:41.857 16:40:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:24:41.857 16:40:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3
00:24:41.857 16:40:18 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:24:41.857 16:40:18 -- common/autotest_common.sh@889 -- # local i
00:24:41.857 16:40:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:24:41.857 16:40:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:24:41.857 16:40:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:24:42.115 16:40:18 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:24:42.115 [
00:24:42.115 {
00:24:42.115 "name": "BaseBdev3",
00:24:42.115 "aliases": [
00:24:42.115 "986f939d-f750-44ed-8946-e3ac449a4bc4"
00:24:42.115 ],
00:24:42.115 "product_name": "Malloc disk",
00:24:42.115 "block_size": 512,
00:24:42.115 "num_blocks": 65536,
00:24:42.115 "uuid": "986f939d-f750-44ed-8946-e3ac449a4bc4",
00:24:42.115 "assigned_rate_limits": {
00:24:42.115 "rw_ios_per_sec": 0,
00:24:42.115 "rw_mbytes_per_sec": 0,
00:24:42.115 "r_mbytes_per_sec": 0,
00:24:42.115 "w_mbytes_per_sec": 0
00:24:42.115 },
00:24:42.115 "claimed": true,
00:24:42.115 "claim_type": "exclusive_write",
00:24:42.115 "zoned": false,
00:24:42.115 "supported_io_types": {
00:24:42.115 "read": true,
00:24:42.115 "write": true,
00:24:42.115 "unmap": true,
00:24:42.115 "write_zeroes": true,
00:24:42.115 "flush": true,
00:24:42.115 "reset": true,
00:24:42.115 "compare": false,
00:24:42.115 "compare_and_write": false,
00:24:42.115 "abort": true,
00:24:42.115 "nvme_admin": false,
00:24:42.115 "nvme_io": false
00:24:42.115 },
00:24:42.115 "memory_domains": [
00:24:42.115 {
00:24:42.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:42.115 "dma_device_type": 2
00:24:42.115 }
00:24:42.115 ],
00:24:42.115 "driver_specific": {}
00:24:42.115 }
00:24:42.115 ]
00:24:42.115 16:40:18 -- common/autotest_common.sh@895 -- # return 0
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:42.115 16:40:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:42.373 16:40:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:42.373 "name": "Existed_Raid",
00:24:42.373 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:42.373 "strip_size_kb": 64,
00:24:42.373 "state": "configuring",
00:24:42.373 "raid_level": "raid5f",
00:24:42.373 "superblock": false,
00:24:42.373 "num_base_bdevs": 4,
00:24:42.373 "num_base_bdevs_discovered": 3,
00:24:42.373 "num_base_bdevs_operational": 4,
00:24:42.373 "base_bdevs_list": [
00:24:42.373 {
00:24:42.373 "name": "BaseBdev1",
00:24:42.373 "uuid": "59934795-9210-426c-844b-5c21c8e3a4df",
00:24:42.373 "is_configured": true,
00:24:42.373 "data_offset": 0,
00:24:42.373 "data_size": 65536
00:24:42.373 },
00:24:42.373 {
00:24:42.373 "name": "BaseBdev2",
00:24:42.373 "uuid": "f799b0b6-5799-494a-96e1-e645858bedf3",
00:24:42.373 "is_configured": true,
00:24:42.373 "data_offset": 0,
00:24:42.373 "data_size": 65536
00:24:42.373 },
00:24:42.373 {
00:24:42.373 "name": "BaseBdev3",
00:24:42.373 "uuid": "986f939d-f750-44ed-8946-e3ac449a4bc4",
00:24:42.373 "is_configured": true,
00:24:42.373 "data_offset": 0,
00:24:42.373 "data_size": 65536
00:24:42.373 },
00:24:42.373 {
00:24:42.373 "name": "BaseBdev4",
00:24:42.373 "uuid": "00000000-0000-0000-0000-000000000000",
00:24:42.373 "is_configured": false,
00:24:42.373 "data_offset": 0,
00:24:42.373 "data_size": 0
00:24:42.373 }
00:24:42.373 ]
00:24:42.373 }'
00:24:42.373 16:40:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:42.373 16:40:19 -- common/autotest_common.sh@10 -- # set +x
00:24:43.308 16:40:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:24:43.308 [2024-07-11 16:40:19.980379] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:24:43.308 [2024-07-11 16:40:19.980431] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280
00:24:43.308 [2024-07-11 16:40:19.980441] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:24:43.308 [2024-07-11 16:40:19.980549] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0
00:24:43.308 [2024-07-11 16:40:19.986331] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280
00:24:43.308 [2024-07-11 16:40:19.986358] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280
00:24:43.308 [2024-07-11 16:40:19.986824] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:24:43.308 BaseBdev4
00:24:43.308 16:40:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:24:43.308 16:40:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4
00:24:43.308 16:40:19 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:24:43.308 16:40:19 -- common/autotest_common.sh@889 -- # local i
00:24:43.308 16:40:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:24:43.308 16:40:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:24:43.308 16:40:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:24:43.567 16:40:20 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:24:43.567 [
00:24:43.567 {
00:24:43.567 "name": "BaseBdev4",
00:24:43.567 "aliases": [
00:24:43.567 "14de52b9-906e-4e1a-bc9b-747602dc3055"
00:24:43.567 ],
00:24:43.567 "product_name": "Malloc disk",
00:24:43.567 "block_size": 512,
00:24:43.567 "num_blocks": 65536,
00:24:43.567 "uuid": "14de52b9-906e-4e1a-bc9b-747602dc3055",
00:24:43.567 "assigned_rate_limits": {
00:24:43.567 "rw_ios_per_sec": 0,
00:24:43.567 "rw_mbytes_per_sec": 0,
00:24:43.567 "r_mbytes_per_sec": 0,
00:24:43.567 "w_mbytes_per_sec": 0
00:24:43.567 },
00:24:43.567 "claimed": true,
00:24:43.567 "claim_type": "exclusive_write",
00:24:43.567 "zoned": false,
00:24:43.567 "supported_io_types": {
00:24:43.567 "read": true,
00:24:43.567 "write": true,
00:24:43.567 "unmap": true,
00:24:43.567 "write_zeroes": true,
00:24:43.567 "flush": true,
00:24:43.567 "reset": true,
00:24:43.567 "compare": false,
00:24:43.567 "compare_and_write": false,
00:24:43.567 "abort": true,
00:24:43.567 "nvme_admin": false,
00:24:43.567 "nvme_io": false
00:24:43.567 },
00:24:43.567 "memory_domains": [
00:24:43.567 {
00:24:43.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:24:43.567 "dma_device_type": 2
00:24:43.567 }
00:24:43.567 ],
00:24:43.567 "driver_specific": {}
00:24:43.567 }
00:24:43.567 ]
00:24:43.567 16:40:20 -- common/autotest_common.sh@895 -- # return 0
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@125 -- # local tmp
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:24:43.567 16:40:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:24:43.825 16:40:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:24:43.825 "name": "Existed_Raid",
00:24:43.825 "uuid": "8b81a2b1-f2fe-4a8c-ac04-9fadc148bec3",
00:24:43.825 "strip_size_kb": 64,
00:24:43.825 "state": "online",
00:24:43.825 "raid_level": "raid5f",
00:24:43.825 "superblock": false,
00:24:43.825 "num_base_bdevs": 4,
00:24:43.825 "num_base_bdevs_discovered": 4,
00:24:43.825 "num_base_bdevs_operational": 4,
00:24:43.825 "base_bdevs_list": [
00:24:43.825 {
00:24:43.825 "name": "BaseBdev1",
00:24:43.825 "uuid": "59934795-9210-426c-844b-5c21c8e3a4df",
00:24:43.825 "is_configured": true,
00:24:43.825 "data_offset": 0,
00:24:43.825 "data_size": 65536
00:24:43.825 },
00:24:43.825 {
00:24:43.825 "name": "BaseBdev2",
00:24:43.825 "uuid": "f799b0b6-5799-494a-96e1-e645858bedf3",
00:24:43.825 "is_configured": true,
00:24:43.825 "data_offset": 0,
00:24:43.825 "data_size": 65536
00:24:43.825 },
00:24:43.825 {
00:24:43.825 "name": "BaseBdev3",
00:24:43.825 "uuid": "986f939d-f750-44ed-8946-e3ac449a4bc4",
00:24:43.825 "is_configured": true,
00:24:43.825 "data_offset": 0,
00:24:43.825 "data_size": 65536
00:24:43.825 },
00:24:43.825 {
00:24:43.825 "name": "BaseBdev4",
00:24:43.825 "uuid": "14de52b9-906e-4e1a-bc9b-747602dc3055",
00:24:43.825 "is_configured": true,
00:24:43.825 "data_offset": 0,
00:24:43.825 "data_size": 65536
00:24:43.825 }
00:24:43.825 ]
00:24:43.825 }'
00:24:43.825 16:40:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:24:43.825 16:40:20 -- common/autotest_common.sh@10 -- # set +x
00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:24:44.760 [2024-07-11 16:40:21.473597] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:44.760 16:40:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:44.761 16:40:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.761 16:40:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:45.019 16:40:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:45.019 "name": "Existed_Raid", 00:24:45.019 "uuid": "8b81a2b1-f2fe-4a8c-ac04-9fadc148bec3", 00:24:45.019 "strip_size_kb": 64, 00:24:45.019 "state": "online", 00:24:45.019 "raid_level": "raid5f", 00:24:45.019 "superblock": false, 00:24:45.019 "num_base_bdevs": 4, 00:24:45.019 "num_base_bdevs_discovered": 3, 00:24:45.019 "num_base_bdevs_operational": 3, 00:24:45.019 "base_bdevs_list": [ 00:24:45.019 { 00:24:45.019 "name": null, 00:24:45.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.019 "is_configured": false, 00:24:45.019 "data_offset": 0, 00:24:45.019 "data_size": 65536 00:24:45.019 }, 00:24:45.019 { 00:24:45.019 "name": "BaseBdev2", 00:24:45.019 "uuid": "f799b0b6-5799-494a-96e1-e645858bedf3", 00:24:45.019 "is_configured": true, 00:24:45.019 "data_offset": 0, 00:24:45.019 "data_size": 65536 00:24:45.019 }, 00:24:45.019 { 00:24:45.019 "name": "BaseBdev3", 00:24:45.019 "uuid": "986f939d-f750-44ed-8946-e3ac449a4bc4", 00:24:45.019 "is_configured": true, 00:24:45.019 "data_offset": 0, 00:24:45.019 "data_size": 65536 00:24:45.019 }, 00:24:45.019 { 00:24:45.019 "name": "BaseBdev4", 00:24:45.019 "uuid": "14de52b9-906e-4e1a-bc9b-747602dc3055", 00:24:45.019 "is_configured": true, 00:24:45.019 "data_offset": 0, 00:24:45.019 "data_size": 65536 00:24:45.019 } 00:24:45.019 ] 00:24:45.019 }' 00:24:45.019 16:40:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:45.019 16:40:21 -- common/autotest_common.sh@10 -- # set +x 00:24:45.955 16:40:22 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:45.955 16:40:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:45.955 16:40:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.955 16:40:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:45.955 16:40:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:45.955 16:40:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
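#
# The has_redundancy check above is what sets the expected state after a base
# bdev is deleted: raid5f survives the loss of one member, so Existed_Raid is
# expected to stay online with 3 of 4 base bdevs, and only the next removal
# (BaseBdev2, below) tips it from online to offline. A hedged reconstruction
# of the helper's shape; the trace only shows the raid5f arm matching:
#
#   has_redundancy() {
#       case $1 in
#           raid5f) return 0 ;;  # redundant level: one lost member is tolerable
#           *) return 1 ;;       # assumed fallthrough for non-redundant levels
#       esac
#   }
#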
00:24:45.955 16:40:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:46.214 [2024-07-11 16:40:22.881503] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:46.214 [2024-07-11 16:40:22.881536] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:46.214 [2024-07-11 16:40:22.881620] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:46.214 16:40:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:46.214 16:40:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:46.214 16:40:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.214 16:40:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:46.472 16:40:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:46.472 16:40:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:46.472 16:40:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:46.729 [2024-07-11 16:40:23.313578] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:46.729 16:40:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:46.729 16:40:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:46.729 16:40:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.729 16:40:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:46.987 16:40:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:46.987 16:40:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:46.987 16:40:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:46.987 [2024-07-11 16:40:23.785378] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:46.987 [2024-07-11 16:40:23.785439] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:24:47.245 16:40:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:47.245 16:40:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:47.245 16:40:23 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.245 16:40:23 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:47.245 16:40:24 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:47.245 16:40:24 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:47.245 16:40:24 -- bdev/bdev_raid.sh@287 -- # killprocess 132988 00:24:47.245 16:40:24 -- common/autotest_common.sh@926 -- # '[' -z 132988 ']' 00:24:47.245 16:40:24 -- common/autotest_common.sh@930 -- # kill -0 132988 00:24:47.245 16:40:24 -- common/autotest_common.sh@931 -- # uname 00:24:47.245 16:40:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:47.245 16:40:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 132988 00:24:47.503 16:40:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:47.503 16:40:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:47.503 16:40:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 132988' 00:24:47.503 killing process with pid 132988 00:24:47.503 16:40:24 -- 
common/autotest_common.sh@945 -- # kill 132988 00:24:47.503 16:40:24 -- common/autotest_common.sh@950 -- # wait 132988 00:24:47.503 [2024-07-11 16:40:24.062816] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:47.503 [2024-07-11 16:40:24.062932] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:48.438 ************************************ 00:24:48.438 END TEST raid5f_state_function_test 00:24:48.438 ************************************ 00:24:48.438 16:40:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:48.438 00:24:48.438 real 0m13.899s 00:24:48.438 user 0m24.903s 00:24:48.438 sys 0m1.483s 00:24:48.438 16:40:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:48.438 16:40:25 -- common/autotest_common.sh@10 -- # set +x 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:24:48.696 16:40:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:24:48.696 16:40:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:48.696 16:40:25 -- common/autotest_common.sh@10 -- # set +x 00:24:48.696 ************************************ 00:24:48.696 START TEST raid5f_state_function_test_sb 00:24:48.696 ************************************ 00:24:48.696 16:40:25 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid5f 4 true 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:24:48.696 
16:40:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=133443 00:24:48.696 Process raid pid: 133443 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 133443' 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 133443 /var/tmp/spdk-raid.sock 00:24:48.696 16:40:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:48.696 16:40:25 -- common/autotest_common.sh@819 -- # '[' -z 133443 ']' 00:24:48.697 16:40:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:48.697 16:40:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:48.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:48.697 16:40:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:48.697 16:40:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:48.697 16:40:25 -- common/autotest_common.sh@10 -- # set +x 00:24:48.697 [2024-07-11 16:40:25.328478] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:48.697 [2024-07-11 16:40:25.328669] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.697 [2024-07-11 16:40:25.495201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.956 [2024-07-11 16:40:25.763214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.215 [2024-07-11 16:40:25.985275] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:49.783 16:40:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:49.783 16:40:26 -- common/autotest_common.sh@852 -- # return 0 00:24:49.783 16:40:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:49.783 [2024-07-11 16:40:26.544805] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:49.783 [2024-07-11 16:40:26.545085] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:49.783 [2024-07-11 16:40:26.545231] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:49.783 [2024-07-11 16:40:26.545422] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:49.783 [2024-07-11 16:40:26.545527] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:49.783 [2024-07-11 16:40:26.545665] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:49.783 [2024-07-11 16:40:26.545766] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:49.783 [2024-07-11 16:40:26.545899] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:49.783 16:40:26 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:49.783 16:40:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:49.783 16:40:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:49.783 16:40:26 -- bdev/bdev_raid.sh@119 -- # 
local raid_level=raid5f 00:24:49.783 16:40:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:49.783 16:40:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:49.783 16:40:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:49.783 16:40:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:49.783 16:40:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:49.783 16:40:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:49.783 16:40:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.783 16:40:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:50.042 16:40:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:50.042 "name": "Existed_Raid", 00:24:50.042 "uuid": "73953ebf-de01-4eed-9c21-eb493b86f1a2", 00:24:50.042 "strip_size_kb": 64, 00:24:50.042 "state": "configuring", 00:24:50.042 "raid_level": "raid5f", 00:24:50.042 "superblock": true, 00:24:50.042 "num_base_bdevs": 4, 00:24:50.042 "num_base_bdevs_discovered": 0, 00:24:50.042 "num_base_bdevs_operational": 4, 00:24:50.042 "base_bdevs_list": [ 00:24:50.042 { 00:24:50.042 "name": "BaseBdev1", 00:24:50.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.042 "is_configured": false, 00:24:50.042 "data_offset": 0, 00:24:50.042 "data_size": 0 00:24:50.042 }, 00:24:50.042 { 00:24:50.042 "name": "BaseBdev2", 00:24:50.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.042 "is_configured": false, 00:24:50.042 "data_offset": 0, 00:24:50.042 "data_size": 0 00:24:50.042 }, 00:24:50.042 { 00:24:50.042 "name": "BaseBdev3", 00:24:50.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.042 "is_configured": false, 00:24:50.042 "data_offset": 0, 00:24:50.042 "data_size": 0 00:24:50.042 }, 00:24:50.042 { 00:24:50.042 "name": "BaseBdev4", 00:24:50.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.042 "is_configured": false, 00:24:50.042 "data_offset": 0, 00:24:50.042 "data_size": 0 00:24:50.042 } 00:24:50.042 ] 00:24:50.042 }' 00:24:50.042 16:40:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:50.042 16:40:26 -- common/autotest_common.sh@10 -- # set +x 00:24:50.979 16:40:27 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:50.979 [2024-07-11 16:40:27.696947] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:50.979 [2024-07-11 16:40:27.697175] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:50.979 16:40:27 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:51.238 [2024-07-11 16:40:27.961081] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:51.238 [2024-07-11 16:40:27.961387] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:51.238 [2024-07-11 16:40:27.961510] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:51.238 [2024-07-11 16:40:27.961592] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:51.238 [2024-07-11 16:40:27.961789] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:51.238 
[2024-07-11 16:40:27.961868] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:51.238 [2024-07-11 16:40:27.962041] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:51.238 [2024-07-11 16:40:27.962097] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:51.238 16:40:27 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:51.497 [2024-07-11 16:40:28.227615] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:51.497 BaseBdev1 00:24:51.497 16:40:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:51.497 16:40:28 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:51.497 16:40:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:51.497 16:40:28 -- common/autotest_common.sh@889 -- # local i 00:24:51.497 16:40:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:51.497 16:40:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:51.497 16:40:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:51.756 16:40:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:52.015 [ 00:24:52.015 { 00:24:52.015 "name": "BaseBdev1", 00:24:52.015 "aliases": [ 00:24:52.015 "d660b8eb-b3da-47ff-821d-4b1bacba3346" 00:24:52.015 ], 00:24:52.015 "product_name": "Malloc disk", 00:24:52.015 "block_size": 512, 00:24:52.015 "num_blocks": 65536, 00:24:52.015 "uuid": "d660b8eb-b3da-47ff-821d-4b1bacba3346", 00:24:52.015 "assigned_rate_limits": { 00:24:52.015 "rw_ios_per_sec": 0, 00:24:52.015 "rw_mbytes_per_sec": 0, 00:24:52.015 "r_mbytes_per_sec": 0, 00:24:52.015 "w_mbytes_per_sec": 0 00:24:52.015 }, 00:24:52.015 "claimed": true, 00:24:52.015 "claim_type": "exclusive_write", 00:24:52.015 "zoned": false, 00:24:52.015 "supported_io_types": { 00:24:52.015 "read": true, 00:24:52.015 "write": true, 00:24:52.015 "unmap": true, 00:24:52.015 "write_zeroes": true, 00:24:52.015 "flush": true, 00:24:52.015 "reset": true, 00:24:52.015 "compare": false, 00:24:52.015 "compare_and_write": false, 00:24:52.015 "abort": true, 00:24:52.015 "nvme_admin": false, 00:24:52.015 "nvme_io": false 00:24:52.015 }, 00:24:52.015 "memory_domains": [ 00:24:52.015 { 00:24:52.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.015 "dma_device_type": 2 00:24:52.015 } 00:24:52.015 ], 00:24:52.015 "driver_specific": {} 00:24:52.015 } 00:24:52.015 ] 00:24:52.015 16:40:28 -- common/autotest_common.sh@895 -- # return 0 00:24:52.015 16:40:28 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:52.015 16:40:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:52.015 16:40:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:52.015 16:40:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:52.015 16:40:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:52.015 16:40:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:52.015 16:40:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:52.015 16:40:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:52.015 16:40:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:52.015 
16:40:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:52.015 16:40:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.015 16:40:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:52.274 16:40:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:52.274 "name": "Existed_Raid", 00:24:52.274 "uuid": "84034ae0-d1ac-48f7-9a90-0dbf1918a90f", 00:24:52.274 "strip_size_kb": 64, 00:24:52.274 "state": "configuring", 00:24:52.274 "raid_level": "raid5f", 00:24:52.274 "superblock": true, 00:24:52.274 "num_base_bdevs": 4, 00:24:52.274 "num_base_bdevs_discovered": 1, 00:24:52.274 "num_base_bdevs_operational": 4, 00:24:52.274 "base_bdevs_list": [ 00:24:52.274 { 00:24:52.274 "name": "BaseBdev1", 00:24:52.274 "uuid": "d660b8eb-b3da-47ff-821d-4b1bacba3346", 00:24:52.274 "is_configured": true, 00:24:52.274 "data_offset": 2048, 00:24:52.274 "data_size": 63488 00:24:52.274 }, 00:24:52.274 { 00:24:52.274 "name": "BaseBdev2", 00:24:52.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.274 "is_configured": false, 00:24:52.274 "data_offset": 0, 00:24:52.274 "data_size": 0 00:24:52.274 }, 00:24:52.274 { 00:24:52.274 "name": "BaseBdev3", 00:24:52.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.274 "is_configured": false, 00:24:52.274 "data_offset": 0, 00:24:52.274 "data_size": 0 00:24:52.274 }, 00:24:52.274 { 00:24:52.274 "name": "BaseBdev4", 00:24:52.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.274 "is_configured": false, 00:24:52.274 "data_offset": 0, 00:24:52.274 "data_size": 0 00:24:52.274 } 00:24:52.274 ] 00:24:52.274 }' 00:24:52.274 16:40:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:52.274 16:40:28 -- common/autotest_common.sh@10 -- # set +x 00:24:52.841 16:40:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:53.100 [2024-07-11 16:40:29.756188] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:53.100 [2024-07-11 16:40:29.756348] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:24:53.100 16:40:29 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:24:53.100 16:40:29 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:53.359 16:40:30 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:53.617 BaseBdev1 00:24:53.617 16:40:30 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:24:53.617 16:40:30 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:24:53.617 16:40:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:53.617 16:40:30 -- common/autotest_common.sh@889 -- # local i 00:24:53.617 16:40:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:53.617 16:40:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:53.617 16:40:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:53.876 16:40:30 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:53.876 [ 00:24:53.876 { 00:24:53.876 "name": "BaseBdev1", 00:24:53.876 "aliases": [ 00:24:53.876 
"a3c05074-60a2-4051-9b2c-bcf43cc3262d" 00:24:53.876 ], 00:24:53.876 "product_name": "Malloc disk", 00:24:53.876 "block_size": 512, 00:24:53.876 "num_blocks": 65536, 00:24:53.876 "uuid": "a3c05074-60a2-4051-9b2c-bcf43cc3262d", 00:24:53.876 "assigned_rate_limits": { 00:24:53.876 "rw_ios_per_sec": 0, 00:24:53.876 "rw_mbytes_per_sec": 0, 00:24:53.876 "r_mbytes_per_sec": 0, 00:24:53.876 "w_mbytes_per_sec": 0 00:24:53.876 }, 00:24:53.876 "claimed": false, 00:24:53.876 "zoned": false, 00:24:53.876 "supported_io_types": { 00:24:53.876 "read": true, 00:24:53.876 "write": true, 00:24:53.876 "unmap": true, 00:24:53.876 "write_zeroes": true, 00:24:53.876 "flush": true, 00:24:53.876 "reset": true, 00:24:53.876 "compare": false, 00:24:53.876 "compare_and_write": false, 00:24:53.876 "abort": true, 00:24:53.876 "nvme_admin": false, 00:24:53.876 "nvme_io": false 00:24:53.876 }, 00:24:53.876 "memory_domains": [ 00:24:53.876 { 00:24:53.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.876 "dma_device_type": 2 00:24:53.876 } 00:24:53.876 ], 00:24:53.876 "driver_specific": {} 00:24:53.876 } 00:24:53.876 ] 00:24:53.876 16:40:30 -- common/autotest_common.sh@895 -- # return 0 00:24:53.877 16:40:30 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:54.135 [2024-07-11 16:40:30.860692] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:54.135 [2024-07-11 16:40:30.862415] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:54.135 [2024-07-11 16:40:30.862608] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:54.135 [2024-07-11 16:40:30.862711] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:54.135 [2024-07-11 16:40:30.862768] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:54.135 [2024-07-11 16:40:30.862909] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:54.135 [2024-07-11 16:40:30.863014] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.135 16:40:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:54.394 16:40:31 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:24:54.394 "name": "Existed_Raid", 00:24:54.394 "uuid": "37d0603c-491e-4bab-bfb4-d66c10d1fdfd", 00:24:54.394 "strip_size_kb": 64, 00:24:54.394 "state": "configuring", 00:24:54.394 "raid_level": "raid5f", 00:24:54.394 "superblock": true, 00:24:54.394 "num_base_bdevs": 4, 00:24:54.394 "num_base_bdevs_discovered": 1, 00:24:54.394 "num_base_bdevs_operational": 4, 00:24:54.394 "base_bdevs_list": [ 00:24:54.394 { 00:24:54.394 "name": "BaseBdev1", 00:24:54.394 "uuid": "a3c05074-60a2-4051-9b2c-bcf43cc3262d", 00:24:54.394 "is_configured": true, 00:24:54.394 "data_offset": 2048, 00:24:54.394 "data_size": 63488 00:24:54.394 }, 00:24:54.394 { 00:24:54.394 "name": "BaseBdev2", 00:24:54.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.394 "is_configured": false, 00:24:54.394 "data_offset": 0, 00:24:54.394 "data_size": 0 00:24:54.394 }, 00:24:54.394 { 00:24:54.394 "name": "BaseBdev3", 00:24:54.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.394 "is_configured": false, 00:24:54.394 "data_offset": 0, 00:24:54.394 "data_size": 0 00:24:54.394 }, 00:24:54.394 { 00:24:54.394 "name": "BaseBdev4", 00:24:54.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:54.394 "is_configured": false, 00:24:54.394 "data_offset": 0, 00:24:54.394 "data_size": 0 00:24:54.394 } 00:24:54.394 ] 00:24:54.394 }' 00:24:54.394 16:40:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:54.394 16:40:31 -- common/autotest_common.sh@10 -- # set +x 00:24:54.961 16:40:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:55.220 [2024-07-11 16:40:31.944587] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:55.220 BaseBdev2 00:24:55.220 16:40:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:55.220 16:40:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:24:55.220 16:40:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:55.220 16:40:31 -- common/autotest_common.sh@889 -- # local i 00:24:55.220 16:40:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:55.220 16:40:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:55.220 16:40:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:55.479 16:40:32 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:55.738 [ 00:24:55.738 { 00:24:55.738 "name": "BaseBdev2", 00:24:55.738 "aliases": [ 00:24:55.738 "f9b61373-0305-4cfb-834a-e33208703a5b" 00:24:55.738 ], 00:24:55.738 "product_name": "Malloc disk", 00:24:55.738 "block_size": 512, 00:24:55.738 "num_blocks": 65536, 00:24:55.738 "uuid": "f9b61373-0305-4cfb-834a-e33208703a5b", 00:24:55.738 "assigned_rate_limits": { 00:24:55.738 "rw_ios_per_sec": 0, 00:24:55.738 "rw_mbytes_per_sec": 0, 00:24:55.738 "r_mbytes_per_sec": 0, 00:24:55.738 "w_mbytes_per_sec": 0 00:24:55.738 }, 00:24:55.738 "claimed": true, 00:24:55.738 "claim_type": "exclusive_write", 00:24:55.738 "zoned": false, 00:24:55.738 "supported_io_types": { 00:24:55.738 "read": true, 00:24:55.738 "write": true, 00:24:55.738 "unmap": true, 00:24:55.738 "write_zeroes": true, 00:24:55.738 "flush": true, 00:24:55.738 "reset": true, 00:24:55.738 "compare": false, 00:24:55.738 "compare_and_write": false, 00:24:55.738 "abort": true, 00:24:55.738 "nvme_admin": false, 00:24:55.738 
"nvme_io": false 00:24:55.738 }, 00:24:55.738 "memory_domains": [ 00:24:55.738 { 00:24:55.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.738 "dma_device_type": 2 00:24:55.738 } 00:24:55.738 ], 00:24:55.738 "driver_specific": {} 00:24:55.738 } 00:24:55.738 ] 00:24:55.738 16:40:32 -- common/autotest_common.sh@895 -- # return 0 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:55.738 "name": "Existed_Raid", 00:24:55.738 "uuid": "37d0603c-491e-4bab-bfb4-d66c10d1fdfd", 00:24:55.738 "strip_size_kb": 64, 00:24:55.738 "state": "configuring", 00:24:55.738 "raid_level": "raid5f", 00:24:55.738 "superblock": true, 00:24:55.738 "num_base_bdevs": 4, 00:24:55.738 "num_base_bdevs_discovered": 2, 00:24:55.738 "num_base_bdevs_operational": 4, 00:24:55.738 "base_bdevs_list": [ 00:24:55.738 { 00:24:55.738 "name": "BaseBdev1", 00:24:55.738 "uuid": "a3c05074-60a2-4051-9b2c-bcf43cc3262d", 00:24:55.738 "is_configured": true, 00:24:55.738 "data_offset": 2048, 00:24:55.738 "data_size": 63488 00:24:55.738 }, 00:24:55.738 { 00:24:55.738 "name": "BaseBdev2", 00:24:55.738 "uuid": "f9b61373-0305-4cfb-834a-e33208703a5b", 00:24:55.738 "is_configured": true, 00:24:55.738 "data_offset": 2048, 00:24:55.738 "data_size": 63488 00:24:55.738 }, 00:24:55.738 { 00:24:55.738 "name": "BaseBdev3", 00:24:55.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.738 "is_configured": false, 00:24:55.738 "data_offset": 0, 00:24:55.738 "data_size": 0 00:24:55.738 }, 00:24:55.738 { 00:24:55.738 "name": "BaseBdev4", 00:24:55.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:55.738 "is_configured": false, 00:24:55.738 "data_offset": 0, 00:24:55.738 "data_size": 0 00:24:55.738 } 00:24:55.738 ] 00:24:55.738 }' 00:24:55.738 16:40:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:55.738 16:40:32 -- common/autotest_common.sh@10 -- # set +x 00:24:56.673 16:40:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:56.673 [2024-07-11 16:40:33.328421] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:56.673 BaseBdev3 00:24:56.673 16:40:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:56.673 16:40:33 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:24:56.673 16:40:33 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:56.673 16:40:33 -- common/autotest_common.sh@889 -- # local i 00:24:56.673 16:40:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:56.673 16:40:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:56.673 16:40:33 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:56.931 16:40:33 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:56.931 [ 00:24:56.931 { 00:24:56.931 "name": "BaseBdev3", 00:24:56.931 "aliases": [ 00:24:56.931 "cda27dd8-3f62-4e68-b45a-d8eb8c289090" 00:24:56.931 ], 00:24:56.931 "product_name": "Malloc disk", 00:24:56.931 "block_size": 512, 00:24:56.931 "num_blocks": 65536, 00:24:56.931 "uuid": "cda27dd8-3f62-4e68-b45a-d8eb8c289090", 00:24:56.931 "assigned_rate_limits": { 00:24:56.931 "rw_ios_per_sec": 0, 00:24:56.931 "rw_mbytes_per_sec": 0, 00:24:56.931 "r_mbytes_per_sec": 0, 00:24:56.931 "w_mbytes_per_sec": 0 00:24:56.931 }, 00:24:56.931 "claimed": true, 00:24:56.931 "claim_type": "exclusive_write", 00:24:56.931 "zoned": false, 00:24:56.931 "supported_io_types": { 00:24:56.931 "read": true, 00:24:56.931 "write": true, 00:24:56.931 "unmap": true, 00:24:56.931 "write_zeroes": true, 00:24:56.931 "flush": true, 00:24:56.931 "reset": true, 00:24:56.931 "compare": false, 00:24:56.931 "compare_and_write": false, 00:24:56.931 "abort": true, 00:24:56.931 "nvme_admin": false, 00:24:56.931 "nvme_io": false 00:24:56.931 }, 00:24:56.931 "memory_domains": [ 00:24:56.931 { 00:24:56.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.931 "dma_device_type": 2 00:24:56.931 } 00:24:56.931 ], 00:24:56.931 "driver_specific": {} 00:24:56.931 } 00:24:56.931 ] 00:24:56.931 16:40:33 -- common/autotest_common.sh@895 -- # return 0 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.931 16:40:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:57.197 16:40:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:57.197 "name": "Existed_Raid", 00:24:57.197 "uuid": "37d0603c-491e-4bab-bfb4-d66c10d1fdfd", 00:24:57.197 "strip_size_kb": 64, 00:24:57.197 "state": "configuring", 00:24:57.197 "raid_level": "raid5f", 00:24:57.197 "superblock": true, 00:24:57.197 "num_base_bdevs": 4, 00:24:57.197 "num_base_bdevs_discovered": 3, 00:24:57.197 "num_base_bdevs_operational": 4, 
00:24:57.197 "base_bdevs_list": [ 00:24:57.197 { 00:24:57.197 "name": "BaseBdev1", 00:24:57.198 "uuid": "a3c05074-60a2-4051-9b2c-bcf43cc3262d", 00:24:57.198 "is_configured": true, 00:24:57.198 "data_offset": 2048, 00:24:57.198 "data_size": 63488 00:24:57.198 }, 00:24:57.198 { 00:24:57.198 "name": "BaseBdev2", 00:24:57.198 "uuid": "f9b61373-0305-4cfb-834a-e33208703a5b", 00:24:57.198 "is_configured": true, 00:24:57.198 "data_offset": 2048, 00:24:57.198 "data_size": 63488 00:24:57.198 }, 00:24:57.198 { 00:24:57.198 "name": "BaseBdev3", 00:24:57.198 "uuid": "cda27dd8-3f62-4e68-b45a-d8eb8c289090", 00:24:57.198 "is_configured": true, 00:24:57.198 "data_offset": 2048, 00:24:57.198 "data_size": 63488 00:24:57.198 }, 00:24:57.198 { 00:24:57.198 "name": "BaseBdev4", 00:24:57.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.198 "is_configured": false, 00:24:57.198 "data_offset": 0, 00:24:57.198 "data_size": 0 00:24:57.198 } 00:24:57.198 ] 00:24:57.198 }' 00:24:57.198 16:40:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:57.198 16:40:33 -- common/autotest_common.sh@10 -- # set +x 00:24:57.808 16:40:34 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:58.081 [2024-07-11 16:40:34.774116] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:58.081 [2024-07-11 16:40:34.774413] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:24:58.081 [2024-07-11 16:40:34.774438] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:58.081 [2024-07-11 16:40:34.774545] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:58.081 BaseBdev4 00:24:58.081 [2024-07-11 16:40:34.780546] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:24:58.081 [2024-07-11 16:40:34.780571] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:24:58.081 [2024-07-11 16:40:34.780804] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.081 16:40:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:58.081 16:40:34 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:24:58.081 16:40:34 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:24:58.081 16:40:34 -- common/autotest_common.sh@889 -- # local i 00:24:58.081 16:40:34 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:24:58.081 16:40:34 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:24:58.081 16:40:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:58.338 16:40:34 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:58.338 [ 00:24:58.338 { 00:24:58.338 "name": "BaseBdev4", 00:24:58.338 "aliases": [ 00:24:58.338 "b5ab62ff-7d9d-4ebe-a95c-5585d7e68edb" 00:24:58.338 ], 00:24:58.338 "product_name": "Malloc disk", 00:24:58.338 "block_size": 512, 00:24:58.338 "num_blocks": 65536, 00:24:58.338 "uuid": "b5ab62ff-7d9d-4ebe-a95c-5585d7e68edb", 00:24:58.338 "assigned_rate_limits": { 00:24:58.338 "rw_ios_per_sec": 0, 00:24:58.338 "rw_mbytes_per_sec": 0, 00:24:58.338 "r_mbytes_per_sec": 0, 00:24:58.338 "w_mbytes_per_sec": 0 00:24:58.338 }, 00:24:58.338 "claimed": true, 00:24:58.339 "claim_type": 
"exclusive_write", 00:24:58.339 "zoned": false, 00:24:58.339 "supported_io_types": { 00:24:58.339 "read": true, 00:24:58.339 "write": true, 00:24:58.339 "unmap": true, 00:24:58.339 "write_zeroes": true, 00:24:58.339 "flush": true, 00:24:58.339 "reset": true, 00:24:58.339 "compare": false, 00:24:58.339 "compare_and_write": false, 00:24:58.339 "abort": true, 00:24:58.339 "nvme_admin": false, 00:24:58.339 "nvme_io": false 00:24:58.339 }, 00:24:58.339 "memory_domains": [ 00:24:58.339 { 00:24:58.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.339 "dma_device_type": 2 00:24:58.339 } 00:24:58.339 ], 00:24:58.339 "driver_specific": {} 00:24:58.339 } 00:24:58.339 ] 00:24:58.597 16:40:35 -- common/autotest_common.sh@895 -- # return 0 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:58.597 "name": "Existed_Raid", 00:24:58.597 "uuid": "37d0603c-491e-4bab-bfb4-d66c10d1fdfd", 00:24:58.597 "strip_size_kb": 64, 00:24:58.597 "state": "online", 00:24:58.597 "raid_level": "raid5f", 00:24:58.597 "superblock": true, 00:24:58.597 "num_base_bdevs": 4, 00:24:58.597 "num_base_bdevs_discovered": 4, 00:24:58.597 "num_base_bdevs_operational": 4, 00:24:58.597 "base_bdevs_list": [ 00:24:58.597 { 00:24:58.597 "name": "BaseBdev1", 00:24:58.597 "uuid": "a3c05074-60a2-4051-9b2c-bcf43cc3262d", 00:24:58.597 "is_configured": true, 00:24:58.597 "data_offset": 2048, 00:24:58.597 "data_size": 63488 00:24:58.597 }, 00:24:58.597 { 00:24:58.597 "name": "BaseBdev2", 00:24:58.597 "uuid": "f9b61373-0305-4cfb-834a-e33208703a5b", 00:24:58.597 "is_configured": true, 00:24:58.597 "data_offset": 2048, 00:24:58.597 "data_size": 63488 00:24:58.597 }, 00:24:58.597 { 00:24:58.597 "name": "BaseBdev3", 00:24:58.597 "uuid": "cda27dd8-3f62-4e68-b45a-d8eb8c289090", 00:24:58.597 "is_configured": true, 00:24:58.597 "data_offset": 2048, 00:24:58.597 "data_size": 63488 00:24:58.597 }, 00:24:58.597 { 00:24:58.597 "name": "BaseBdev4", 00:24:58.597 "uuid": "b5ab62ff-7d9d-4ebe-a95c-5585d7e68edb", 00:24:58.597 "is_configured": true, 00:24:58.597 "data_offset": 2048, 00:24:58.597 "data_size": 63488 00:24:58.597 } 00:24:58.597 ] 00:24:58.597 }' 00:24:58.597 16:40:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:58.597 16:40:35 -- common/autotest_common.sh@10 -- # set +x 00:24:59.532 16:40:35 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:59.532 [2024-07-11 16:40:36.241581] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.532 16:40:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:59.791 16:40:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:59.791 "name": "Existed_Raid", 00:24:59.791 "uuid": "37d0603c-491e-4bab-bfb4-d66c10d1fdfd", 00:24:59.791 "strip_size_kb": 64, 00:24:59.791 "state": "online", 00:24:59.791 "raid_level": "raid5f", 00:24:59.791 "superblock": true, 00:24:59.791 "num_base_bdevs": 4, 00:24:59.791 "num_base_bdevs_discovered": 3, 00:24:59.791 "num_base_bdevs_operational": 3, 00:24:59.791 "base_bdevs_list": [ 00:24:59.791 { 00:24:59.791 "name": null, 00:24:59.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:59.791 "is_configured": false, 00:24:59.791 "data_offset": 2048, 00:24:59.791 "data_size": 63488 00:24:59.791 }, 00:24:59.791 { 00:24:59.791 "name": "BaseBdev2", 00:24:59.791 "uuid": "f9b61373-0305-4cfb-834a-e33208703a5b", 00:24:59.791 "is_configured": true, 00:24:59.791 "data_offset": 2048, 00:24:59.791 "data_size": 63488 00:24:59.791 }, 00:24:59.791 { 00:24:59.791 "name": "BaseBdev3", 00:24:59.791 "uuid": "cda27dd8-3f62-4e68-b45a-d8eb8c289090", 00:24:59.791 "is_configured": true, 00:24:59.791 "data_offset": 2048, 00:24:59.791 "data_size": 63488 00:24:59.791 }, 00:24:59.791 { 00:24:59.791 "name": "BaseBdev4", 00:24:59.791 "uuid": "b5ab62ff-7d9d-4ebe-a95c-5585d7e68edb", 00:24:59.791 "is_configured": true, 00:24:59.791 "data_offset": 2048, 00:24:59.791 "data_size": 63488 00:24:59.791 } 00:24:59.791 ] 00:24:59.791 }' 00:24:59.791 16:40:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:59.791 16:40:36 -- common/autotest_common.sh@10 -- # set +x 00:25:00.359 16:40:37 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:25:00.359 16:40:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:00.359 16:40:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.359 16:40:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:00.619 16:40:37 -- bdev/bdev_raid.sh@274 -- # 
raid_bdev=Existed_Raid 00:25:00.619 16:40:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:00.619 16:40:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:00.878 [2024-07-11 16:40:37.553547] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:00.878 [2024-07-11 16:40:37.553580] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:00.878 [2024-07-11 16:40:37.553633] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:00.878 16:40:37 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:00.878 16:40:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:00.878 16:40:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.878 16:40:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:01.137 16:40:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:01.137 16:40:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:01.137 16:40:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:01.397 [2024-07-11 16:40:38.044360] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:01.397 16:40:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:01.397 16:40:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:01.397 16:40:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:25:01.397 16:40:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.656 16:40:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:25:01.656 16:40:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:01.656 16:40:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:01.915 [2024-07-11 16:40:38.531056] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:01.915 [2024-07-11 16:40:38.531108] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:25:01.915 16:40:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:25:01.915 16:40:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:25:01.915 16:40:38 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.915 16:40:38 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:25:02.173 16:40:38 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:25:02.173 16:40:38 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:25:02.173 16:40:38 -- bdev/bdev_raid.sh@287 -- # killprocess 133443 00:25:02.173 16:40:38 -- common/autotest_common.sh@926 -- # '[' -z 133443 ']' 00:25:02.173 16:40:38 -- common/autotest_common.sh@930 -- # kill -0 133443 00:25:02.173 16:40:38 -- common/autotest_common.sh@931 -- # uname 00:25:02.173 16:40:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:02.173 16:40:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133443 00:25:02.173 16:40:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:02.173 16:40:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:02.173 16:40:38 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 133443' 00:25:02.173 killing process with pid 133443 00:25:02.173 16:40:38 -- common/autotest_common.sh@945 -- # kill 133443 00:25:02.173 16:40:38 -- common/autotest_common.sh@950 -- # wait 133443 00:25:02.173 [2024-07-11 16:40:38.810873] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:02.173 [2024-07-11 16:40:38.811008] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:03.108 ************************************ 00:25:03.108 END TEST raid5f_state_function_test_sb 00:25:03.108 ************************************ 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@289 -- # return 0 00:25:03.108 00:25:03.108 real 0m14.531s 00:25:03.108 user 0m26.116s 00:25:03.108 sys 0m1.556s 00:25:03.108 16:40:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:03.108 16:40:39 -- common/autotest_common.sh@10 -- # set +x 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:25:03.108 16:40:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:25:03.108 16:40:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:03.108 16:40:39 -- common/autotest_common.sh@10 -- # set +x 00:25:03.108 ************************************ 00:25:03.108 START TEST raid5f_superblock_test 00:25:03.108 ************************************ 00:25:03.108 16:40:39 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid5f 4 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@357 -- # raid_pid=133924 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@358 -- # waitforlisten 133924 /var/tmp/spdk-raid.sock 00:25:03.108 16:40:39 -- common/autotest_common.sh@819 -- # '[' -z 133924 ']' 00:25:03.108 16:40:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:03.108 16:40:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:03.108 16:40:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:03.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
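The trace above launches a dedicated bdev_svc instance whose RPC endpoint is the UNIX socket /var/tmp/spdk-raid.sock; every rpc.py call in this test passes -s with that path, and -L bdev_raid enables the *DEBUG* lines seen throughout. A minimal sketch of the same startup handshake, assuming the repository paths used in this run (the test itself waits via waitforlisten from autotest_common.sh; the polling loop below is a hypothetical stand-in):

    # start the minimal bdev app with raid debug logging on a private socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # poll until the socket answers RPCs (stand-in for waitforlisten)
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done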
00:25:03.108 16:40:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:03.108 16:40:39 -- common/autotest_common.sh@10 -- # set +x 00:25:03.108 16:40:39 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:03.108 [2024-07-11 16:40:39.910261] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:03.108 [2024-07-11 16:40:39.910620] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133924 ] 00:25:03.366 [2024-07-11 16:40:40.081839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.625 [2024-07-11 16:40:40.334954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.883 [2024-07-11 16:40:40.505483] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:04.142 16:40:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:04.142 16:40:40 -- common/autotest_common.sh@852 -- # return 0 00:25:04.142 16:40:40 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:25:04.142 16:40:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:04.142 16:40:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:25:04.142 16:40:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:25:04.142 16:40:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:04.142 16:40:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:04.142 16:40:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:04.142 16:40:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:04.142 16:40:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:04.400 malloc1 00:25:04.400 16:40:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:04.658 [2024-07-11 16:40:41.254030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:04.658 [2024-07-11 16:40:41.254137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:04.658 [2024-07-11 16:40:41.254169] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:25:04.658 [2024-07-11 16:40:41.254213] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:04.658 [2024-07-11 16:40:41.256159] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:04.658 [2024-07-11 16:40:41.256204] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:04.658 pt1 00:25:04.658 16:40:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:04.658 16:40:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:04.658 16:40:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:25:04.658 16:40:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:25:04.658 16:40:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:04.658 16:40:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:04.658 16:40:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:04.658 16:40:41 -- bdev/bdev_raid.sh@368 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:04.658 16:40:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:04.917 malloc2 00:25:04.917 16:40:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:05.175 [2024-07-11 16:40:41.797468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:05.175 [2024-07-11 16:40:41.797565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.175 [2024-07-11 16:40:41.797606] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:25:05.175 [2024-07-11 16:40:41.797657] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.175 [2024-07-11 16:40:41.799984] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.175 [2024-07-11 16:40:41.800063] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:05.175 pt2 00:25:05.175 16:40:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:05.175 16:40:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:05.175 16:40:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:25:05.175 16:40:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:25:05.175 16:40:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:05.175 16:40:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:05.175 16:40:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:05.175 16:40:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:05.175 16:40:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:05.432 malloc3 00:25:05.432 16:40:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:05.432 [2024-07-11 16:40:42.211233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:05.432 [2024-07-11 16:40:42.211329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.432 [2024-07-11 16:40:42.211368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:05.432 [2024-07-11 16:40:42.211415] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.432 [2024-07-11 16:40:42.213564] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.432 [2024-07-11 16:40:42.213632] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:05.432 pt3 00:25:05.432 16:40:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:05.432 16:40:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:05.432 16:40:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:25:05.432 16:40:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:25:05.432 16:40:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:05.432 16:40:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:05.432 16:40:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:25:05.432 16:40:42 -- bdev/bdev_raid.sh@368 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:05.432 16:40:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:05.998 malloc4 00:25:05.998 16:40:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:05.998 [2024-07-11 16:40:42.701052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:05.998 [2024-07-11 16:40:42.701173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:05.998 [2024-07-11 16:40:42.701214] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:05.998 [2024-07-11 16:40:42.701270] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:05.998 [2024-07-11 16:40:42.703340] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:05.998 [2024-07-11 16:40:42.703420] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:05.998 pt4 00:25:05.998 16:40:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:25:05.998 16:40:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:25:05.998 16:40:42 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:06.256 [2024-07-11 16:40:42.901183] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:06.256 [2024-07-11 16:40:42.903024] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:06.256 [2024-07-11 16:40:42.903115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:06.256 [2024-07-11 16:40:42.903240] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:06.256 [2024-07-11 16:40:42.903496] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:25:06.256 [2024-07-11 16:40:42.903521] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:06.256 [2024-07-11 16:40:42.903650] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:06.256 [2024-07-11 16:40:42.910227] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:25:06.256 [2024-07-11 16:40:42.910253] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:25:06.256 [2024-07-11 16:40:42.910494] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:06.256 16:40:42 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:06.256 16:40:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:06.256 16:40:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:06.256 16:40:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:06.256 16:40:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:06.256 16:40:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:06.256 16:40:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:06.256 16:40:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:06.256 16:40:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:06.256 16:40:42 -- bdev/bdev_raid.sh@125 -- # local tmp 
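At this point the full stack is assembled: each 32 MiB, 512-byte-block malloc bdev is wrapped in a passthru bdev with a fixed UUID, and the four passthru bdevs back a raid5f array with a 64 KiB strip size and an on-disk superblock (-s). Condensed from the RPCs traced above, on the same socket (the loop form is an editorial condensation, not the test's literal code):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for n in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b malloc$n          # 32 MiB, 512 B blocks
        $rpc bdev_passthru_create -b malloc$n -p pt$n \
            -u 00000000-0000-0000-0000-00000000000$n        # fixed per-member UUID
    done
    $rpc bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' \
        -n raid_bdev1 -s                                    # -s writes the superblock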
00:25:06.256 16:40:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.256 16:40:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.514 16:40:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:06.514 "name": "raid_bdev1", 00:25:06.514 "uuid": "58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c", 00:25:06.514 "strip_size_kb": 64, 00:25:06.514 "state": "online", 00:25:06.514 "raid_level": "raid5f", 00:25:06.514 "superblock": true, 00:25:06.514 "num_base_bdevs": 4, 00:25:06.514 "num_base_bdevs_discovered": 4, 00:25:06.514 "num_base_bdevs_operational": 4, 00:25:06.514 "base_bdevs_list": [ 00:25:06.514 { 00:25:06.514 "name": "pt1", 00:25:06.514 "uuid": "88204a5b-c57c-5fe5-8f31-0f96b157ad3b", 00:25:06.514 "is_configured": true, 00:25:06.514 "data_offset": 2048, 00:25:06.514 "data_size": 63488 00:25:06.514 }, 00:25:06.514 { 00:25:06.514 "name": "pt2", 00:25:06.514 "uuid": "381d9deb-361f-5979-a34e-fa2a8863b13d", 00:25:06.514 "is_configured": true, 00:25:06.514 "data_offset": 2048, 00:25:06.514 "data_size": 63488 00:25:06.514 }, 00:25:06.514 { 00:25:06.514 "name": "pt3", 00:25:06.514 "uuid": "f2af927c-9d89-596b-953b-b05a847d9d83", 00:25:06.514 "is_configured": true, 00:25:06.514 "data_offset": 2048, 00:25:06.514 "data_size": 63488 00:25:06.514 }, 00:25:06.514 { 00:25:06.514 "name": "pt4", 00:25:06.514 "uuid": "14c43481-8ab0-5e95-9e55-ec3e61921e45", 00:25:06.514 "is_configured": true, 00:25:06.514 "data_offset": 2048, 00:25:06.514 "data_size": 63488 00:25:06.514 } 00:25:06.514 ] 00:25:06.514 }' 00:25:06.514 16:40:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:06.514 16:40:43 -- common/autotest_common.sh@10 -- # set +x 00:25:07.079 16:40:43 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:07.079 16:40:43 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:25:07.337 [2024-07-11 16:40:44.005363] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:07.337 16:40:44 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c 00:25:07.337 16:40:44 -- bdev/bdev_raid.sh@380 -- # '[' -z 58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c ']' 00:25:07.337 16:40:44 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:07.595 [2024-07-11 16:40:44.257231] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:07.595 [2024-07-11 16:40:44.257281] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:07.595 [2024-07-11 16:40:44.257362] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:07.595 [2024-07-11 16:40:44.257443] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:07.595 [2024-07-11 16:40:44.257454] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:25:07.595 16:40:44 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.595 16:40:44 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:25:07.854 16:40:44 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:25:07.854 16:40:44 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:25:07.854 16:40:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 
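Verifying array state is nothing more than jq over the bdev_raid_get_bdevs output, and deleting the array leaves its passthru members registered: the loop whose iterations are traced below removes them one by one. A sketch of the check-and-teardown being performed, reusing the log's own jq filters:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_bdev=$($rpc bdev_raid_get_bdevs all | jq -r '.[]')   # empty once deleted
    [ -n "$raid_bdev" ] && echo "raid_bdev1 still registered" >&2
    for pt in pt1 pt2 pt3 pt4; do
        $rpc bdev_passthru_delete $pt                         # releases its malloc bdev
    done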
00:25:07.854 16:40:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:08.113 16:40:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:08.113 16:40:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:08.372 16:40:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:08.372 16:40:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:08.372 16:40:45 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:25:08.372 16:40:45 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:08.631 16:40:45 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:08.631 16:40:45 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:08.890 16:40:45 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:25:08.890 16:40:45 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:08.890 16:40:45 -- common/autotest_common.sh@640 -- # local es=0 00:25:08.890 16:40:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:08.890 16:40:45 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:08.890 16:40:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:08.890 16:40:45 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:08.890 16:40:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:08.890 16:40:45 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:08.890 16:40:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:08.890 16:40:45 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:08.890 16:40:45 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:08.890 16:40:45 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:08.890 [2024-07-11 16:40:45.697475] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:09.148 [2024-07-11 16:40:45.699084] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:09.148 [2024-07-11 16:40:45.699156] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:09.148 [2024-07-11 16:40:45.699199] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:09.148 [2024-07-11 16:40:45.699249] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:25:09.148 [2024-07-11 16:40:45.699346] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:25:09.148 [2024-07-11 
16:40:45.699395] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:25:09.148 [2024-07-11 16:40:45.699448] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:25:09.148 [2024-07-11 16:40:45.699473] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:09.148 [2024-07-11 16:40:45.699483] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:25:09.148 request: 00:25:09.148 { 00:25:09.148 "name": "raid_bdev1", 00:25:09.148 "raid_level": "raid5f", 00:25:09.148 "base_bdevs": [ 00:25:09.148 "malloc1", 00:25:09.148 "malloc2", 00:25:09.148 "malloc3", 00:25:09.148 "malloc4" 00:25:09.148 ], 00:25:09.148 "superblock": false, 00:25:09.148 "strip_size_kb": 64, 00:25:09.148 "method": "bdev_raid_create", 00:25:09.148 "req_id": 1 00:25:09.148 } 00:25:09.148 Got JSON-RPC error response 00:25:09.148 response: 00:25:09.148 { 00:25:09.148 "code": -17, 00:25:09.148 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:09.148 } 00:25:09.148 16:40:45 -- common/autotest_common.sh@643 -- # es=1 00:25:09.148 16:40:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:09.148 16:40:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:09.148 16:40:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:09.149 16:40:45 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.149 16:40:45 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:25:09.149 16:40:45 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:25:09.149 16:40:45 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:25:09.149 16:40:45 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:09.406 [2024-07-11 16:40:46.053495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:09.406 [2024-07-11 16:40:46.053572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.406 [2024-07-11 16:40:46.053600] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:09.406 [2024-07-11 16:40:46.053622] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.406 [2024-07-11 16:40:46.055496] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.406 [2024-07-11 16:40:46.055556] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:09.406 [2024-07-11 16:40:46.055674] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:09.406 [2024-07-11 16:40:46.055728] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:09.406 pt1 00:25:09.406 16:40:46 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:09.406 16:40:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:09.406 16:40:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:09.406 16:40:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:09.406 16:40:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:09.406 16:40:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:09.406 16:40:46 -- bdev/bdev_raid.sh@122 -- # local 
raid_bdev_info 00:25:09.406 16:40:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:09.406 16:40:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:09.406 16:40:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:09.406 16:40:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.406 16:40:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.663 16:40:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:09.663 "name": "raid_bdev1", 00:25:09.663 "uuid": "58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c", 00:25:09.663 "strip_size_kb": 64, 00:25:09.663 "state": "configuring", 00:25:09.663 "raid_level": "raid5f", 00:25:09.663 "superblock": true, 00:25:09.663 "num_base_bdevs": 4, 00:25:09.663 "num_base_bdevs_discovered": 1, 00:25:09.663 "num_base_bdevs_operational": 4, 00:25:09.663 "base_bdevs_list": [ 00:25:09.663 { 00:25:09.663 "name": "pt1", 00:25:09.663 "uuid": "88204a5b-c57c-5fe5-8f31-0f96b157ad3b", 00:25:09.663 "is_configured": true, 00:25:09.663 "data_offset": 2048, 00:25:09.663 "data_size": 63488 00:25:09.663 }, 00:25:09.663 { 00:25:09.663 "name": null, 00:25:09.663 "uuid": "381d9deb-361f-5979-a34e-fa2a8863b13d", 00:25:09.663 "is_configured": false, 00:25:09.663 "data_offset": 2048, 00:25:09.663 "data_size": 63488 00:25:09.663 }, 00:25:09.663 { 00:25:09.663 "name": null, 00:25:09.663 "uuid": "f2af927c-9d89-596b-953b-b05a847d9d83", 00:25:09.663 "is_configured": false, 00:25:09.663 "data_offset": 2048, 00:25:09.663 "data_size": 63488 00:25:09.663 }, 00:25:09.663 { 00:25:09.663 "name": null, 00:25:09.663 "uuid": "14c43481-8ab0-5e95-9e55-ec3e61921e45", 00:25:09.663 "is_configured": false, 00:25:09.663 "data_offset": 2048, 00:25:09.663 "data_size": 63488 00:25:09.663 } 00:25:09.663 ] 00:25:09.663 }' 00:25:09.663 16:40:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:09.663 16:40:46 -- common/autotest_common.sh@10 -- # set +x 00:25:10.594 16:40:47 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:25:10.594 16:40:47 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:10.594 [2024-07-11 16:40:47.229818] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:10.594 [2024-07-11 16:40:47.229911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.594 [2024-07-11 16:40:47.229951] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:10.594 [2024-07-11 16:40:47.229971] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.594 [2024-07-11 16:40:47.230504] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.594 [2024-07-11 16:40:47.230593] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:10.594 [2024-07-11 16:40:47.230726] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:10.594 [2024-07-11 16:40:47.230752] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:10.594 pt2 00:25:10.594 16:40:47 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:10.853 [2024-07-11 16:40:47.425830] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@418 -- 
# verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:10.853 "name": "raid_bdev1", 00:25:10.853 "uuid": "58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c", 00:25:10.853 "strip_size_kb": 64, 00:25:10.853 "state": "configuring", 00:25:10.853 "raid_level": "raid5f", 00:25:10.853 "superblock": true, 00:25:10.853 "num_base_bdevs": 4, 00:25:10.853 "num_base_bdevs_discovered": 1, 00:25:10.853 "num_base_bdevs_operational": 4, 00:25:10.853 "base_bdevs_list": [ 00:25:10.853 { 00:25:10.853 "name": "pt1", 00:25:10.853 "uuid": "88204a5b-c57c-5fe5-8f31-0f96b157ad3b", 00:25:10.853 "is_configured": true, 00:25:10.853 "data_offset": 2048, 00:25:10.853 "data_size": 63488 00:25:10.853 }, 00:25:10.853 { 00:25:10.853 "name": null, 00:25:10.853 "uuid": "381d9deb-361f-5979-a34e-fa2a8863b13d", 00:25:10.853 "is_configured": false, 00:25:10.853 "data_offset": 2048, 00:25:10.853 "data_size": 63488 00:25:10.853 }, 00:25:10.853 { 00:25:10.853 "name": null, 00:25:10.853 "uuid": "f2af927c-9d89-596b-953b-b05a847d9d83", 00:25:10.853 "is_configured": false, 00:25:10.853 "data_offset": 2048, 00:25:10.853 "data_size": 63488 00:25:10.853 }, 00:25:10.853 { 00:25:10.853 "name": null, 00:25:10.853 "uuid": "14c43481-8ab0-5e95-9e55-ec3e61921e45", 00:25:10.853 "is_configured": false, 00:25:10.853 "data_offset": 2048, 00:25:10.853 "data_size": 63488 00:25:10.853 } 00:25:10.853 ] 00:25:10.853 }' 00:25:10.853 16:40:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:10.853 16:40:47 -- common/autotest_common.sh@10 -- # set +x 00:25:11.420 16:40:48 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:25:11.420 16:40:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:11.420 16:40:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:11.679 [2024-07-11 16:40:48.382012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:11.679 [2024-07-11 16:40:48.382101] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.679 [2024-07-11 16:40:48.382136] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:11.679 [2024-07-11 16:40:48.382156] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.679 [2024-07-11 16:40:48.382640] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.679 [2024-07-11 16:40:48.382744] vbdev_passthru.c: 705:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:25:11.679 [2024-07-11 16:40:48.382853] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:11.679 [2024-07-11 16:40:48.382893] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:11.679 pt2 00:25:11.679 16:40:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:11.679 16:40:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:11.679 16:40:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:11.937 [2024-07-11 16:40:48.578039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:11.937 [2024-07-11 16:40:48.578118] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.937 [2024-07-11 16:40:48.578145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:11.937 [2024-07-11 16:40:48.578167] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.937 [2024-07-11 16:40:48.578614] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:11.937 [2024-07-11 16:40:48.578671] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:11.937 [2024-07-11 16:40:48.578786] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:11.937 [2024-07-11 16:40:48.578810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:11.937 pt3 00:25:11.937 16:40:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:11.937 16:40:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:11.937 16:40:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:12.196 [2024-07-11 16:40:48.764234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:12.196 [2024-07-11 16:40:48.764315] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:12.196 [2024-07-11 16:40:48.764346] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:12.196 [2024-07-11 16:40:48.764366] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:12.196 [2024-07-11 16:40:48.764789] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:12.196 [2024-07-11 16:40:48.764855] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:12.196 [2024-07-11 16:40:48.764964] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:12.196 [2024-07-11 16:40:48.764996] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:12.196 [2024-07-11 16:40:48.765131] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:25:12.196 [2024-07-11 16:40:48.765144] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:12.196 [2024-07-11 16:40:48.765238] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:12.196 [2024-07-11 16:40:48.770676] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:25:12.196 [2024-07-11 16:40:48.770702] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:25:12.196 [2024-07-11 16:40:48.770903] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.196 pt4 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:12.196 "name": "raid_bdev1", 00:25:12.196 "uuid": "58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c", 00:25:12.196 "strip_size_kb": 64, 00:25:12.196 "state": "online", 00:25:12.196 "raid_level": "raid5f", 00:25:12.196 "superblock": true, 00:25:12.196 "num_base_bdevs": 4, 00:25:12.196 "num_base_bdevs_discovered": 4, 00:25:12.196 "num_base_bdevs_operational": 4, 00:25:12.196 "base_bdevs_list": [ 00:25:12.196 { 00:25:12.196 "name": "pt1", 00:25:12.196 "uuid": "88204a5b-c57c-5fe5-8f31-0f96b157ad3b", 00:25:12.196 "is_configured": true, 00:25:12.196 "data_offset": 2048, 00:25:12.196 "data_size": 63488 00:25:12.196 }, 00:25:12.196 { 00:25:12.196 "name": "pt2", 00:25:12.196 "uuid": "381d9deb-361f-5979-a34e-fa2a8863b13d", 00:25:12.196 "is_configured": true, 00:25:12.196 "data_offset": 2048, 00:25:12.196 "data_size": 63488 00:25:12.196 }, 00:25:12.196 { 00:25:12.196 "name": "pt3", 00:25:12.196 "uuid": "f2af927c-9d89-596b-953b-b05a847d9d83", 00:25:12.196 "is_configured": true, 00:25:12.196 "data_offset": 2048, 00:25:12.196 "data_size": 63488 00:25:12.196 }, 00:25:12.196 { 00:25:12.196 "name": "pt4", 00:25:12.196 "uuid": "14c43481-8ab0-5e95-9e55-ec3e61921e45", 00:25:12.196 "is_configured": true, 00:25:12.196 "data_offset": 2048, 00:25:12.196 "data_size": 63488 00:25:12.196 } 00:25:12.196 ] 00:25:12.196 }' 00:25:12.196 16:40:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:12.196 16:40:48 -- common/autotest_common.sh@10 -- # set +x 00:25:13.132 16:40:49 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:13.132 16:40:49 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:25:13.132 [2024-07-11 16:40:49.761620] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:13.132 16:40:49 -- bdev/bdev_raid.sh@430 -- # '[' 58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c '!=' 58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c ']' 00:25:13.132 16:40:49 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:25:13.132 16:40:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:25:13.132 16:40:49 -- bdev/bdev_raid.sh@196 -- # 
return 0 00:25:13.132 16:40:49 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:13.390 [2024-07-11 16:40:49.941553] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:13.390 16:40:49 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:13.390 16:40:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:13.390 16:40:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:13.390 16:40:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:13.390 16:40:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:13.390 16:40:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:13.390 16:40:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:13.390 16:40:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:13.390 16:40:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:13.390 16:40:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:13.390 16:40:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.390 16:40:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:13.390 16:40:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:13.390 "name": "raid_bdev1", 00:25:13.390 "uuid": "58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c", 00:25:13.390 "strip_size_kb": 64, 00:25:13.390 "state": "online", 00:25:13.390 "raid_level": "raid5f", 00:25:13.390 "superblock": true, 00:25:13.390 "num_base_bdevs": 4, 00:25:13.390 "num_base_bdevs_discovered": 3, 00:25:13.390 "num_base_bdevs_operational": 3, 00:25:13.390 "base_bdevs_list": [ 00:25:13.390 { 00:25:13.390 "name": null, 00:25:13.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:13.390 "is_configured": false, 00:25:13.390 "data_offset": 2048, 00:25:13.390 "data_size": 63488 00:25:13.390 }, 00:25:13.390 { 00:25:13.390 "name": "pt2", 00:25:13.390 "uuid": "381d9deb-361f-5979-a34e-fa2a8863b13d", 00:25:13.390 "is_configured": true, 00:25:13.390 "data_offset": 2048, 00:25:13.390 "data_size": 63488 00:25:13.390 }, 00:25:13.390 { 00:25:13.390 "name": "pt3", 00:25:13.390 "uuid": "f2af927c-9d89-596b-953b-b05a847d9d83", 00:25:13.390 "is_configured": true, 00:25:13.390 "data_offset": 2048, 00:25:13.390 "data_size": 63488 00:25:13.390 }, 00:25:13.390 { 00:25:13.390 "name": "pt4", 00:25:13.390 "uuid": "14c43481-8ab0-5e95-9e55-ec3e61921e45", 00:25:13.390 "is_configured": true, 00:25:13.390 "data_offset": 2048, 00:25:13.390 "data_size": 63488 00:25:13.390 } 00:25:13.390 ] 00:25:13.390 }' 00:25:13.390 16:40:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:13.390 16:40:50 -- common/autotest_common.sh@10 -- # set +x 00:25:14.395 16:40:50 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:14.395 [2024-07-11 16:40:51.021754] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:14.395 [2024-07-11 16:40:51.021784] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:14.395 [2024-07-11 16:40:51.021850] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:14.395 [2024-07-11 16:40:51.021921] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:14.395 [2024-07-11 16:40:51.021931] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:25:14.395 16:40:51 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.395 16:40:51 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:25:14.670 16:40:51 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:25:14.670 16:40:51 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:25:14.670 16:40:51 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:25:14.670 16:40:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:14.670 16:40:51 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:14.932 16:40:51 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:14.932 16:40:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:14.932 16:40:51 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:15.191 16:40:51 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:15.191 16:40:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:15.191 16:40:51 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:15.449 16:40:51 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:25:15.449 16:40:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:25:15.449 16:40:51 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:25:15.449 16:40:51 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:15.449 16:40:51 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:15.449 [2024-07-11 16:40:52.169276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:15.449 [2024-07-11 16:40:52.169375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:15.449 [2024-07-11 16:40:52.169417] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:15.449 [2024-07-11 16:40:52.169451] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:15.449 [2024-07-11 16:40:52.172479] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:15.449 [2024-07-11 16:40:52.172564] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:15.449 [2024-07-11 16:40:52.172741] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:15.449 [2024-07-11 16:40:52.172812] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:15.449 pt2 00:25:15.449 16:40:52 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:15.449 16:40:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:15.449 16:40:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:15.449 16:40:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:15.449 16:40:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:15.449 16:40:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:15.449 16:40:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:15.449 16:40:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:15.449 16:40:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:15.449 16:40:52 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:25:15.449 16:40:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.449 16:40:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.708 16:40:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:15.708 "name": "raid_bdev1", 00:25:15.708 "uuid": "58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c", 00:25:15.708 "strip_size_kb": 64, 00:25:15.708 "state": "configuring", 00:25:15.708 "raid_level": "raid5f", 00:25:15.708 "superblock": true, 00:25:15.708 "num_base_bdevs": 4, 00:25:15.708 "num_base_bdevs_discovered": 1, 00:25:15.708 "num_base_bdevs_operational": 3, 00:25:15.708 "base_bdevs_list": [ 00:25:15.708 { 00:25:15.708 "name": null, 00:25:15.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:15.708 "is_configured": false, 00:25:15.708 "data_offset": 2048, 00:25:15.708 "data_size": 63488 00:25:15.708 }, 00:25:15.708 { 00:25:15.708 "name": "pt2", 00:25:15.708 "uuid": "381d9deb-361f-5979-a34e-fa2a8863b13d", 00:25:15.708 "is_configured": true, 00:25:15.708 "data_offset": 2048, 00:25:15.708 "data_size": 63488 00:25:15.708 }, 00:25:15.708 { 00:25:15.708 "name": null, 00:25:15.708 "uuid": "f2af927c-9d89-596b-953b-b05a847d9d83", 00:25:15.708 "is_configured": false, 00:25:15.708 "data_offset": 2048, 00:25:15.708 "data_size": 63488 00:25:15.708 }, 00:25:15.708 { 00:25:15.708 "name": null, 00:25:15.708 "uuid": "14c43481-8ab0-5e95-9e55-ec3e61921e45", 00:25:15.708 "is_configured": false, 00:25:15.708 "data_offset": 2048, 00:25:15.708 "data_size": 63488 00:25:15.708 } 00:25:15.708 ] 00:25:15.708 }' 00:25:15.708 16:40:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:15.708 16:40:52 -- common/autotest_common.sh@10 -- # set +x 00:25:16.275 16:40:53 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:16.275 16:40:53 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:16.275 16:40:53 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:16.533 [2024-07-11 16:40:53.285797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:16.533 [2024-07-11 16:40:53.286285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.533 [2024-07-11 16:40:53.286438] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:25:16.533 [2024-07-11 16:40:53.286562] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.533 [2024-07-11 16:40:53.287154] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.533 [2024-07-11 16:40:53.287328] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:16.533 [2024-07-11 16:40:53.287557] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:16.533 [2024-07-11 16:40:53.287599] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:16.533 pt3 00:25:16.533 16:40:53 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:16.533 16:40:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:16.533 16:40:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:16.533 16:40:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:16.533 16:40:53 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:25:16.533 16:40:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:16.533 16:40:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:16.533 16:40:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:16.533 16:40:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:16.533 16:40:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:16.533 16:40:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.533 16:40:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.791 16:40:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:16.791 "name": "raid_bdev1", 00:25:16.791 "uuid": "58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c", 00:25:16.791 "strip_size_kb": 64, 00:25:16.791 "state": "configuring", 00:25:16.791 "raid_level": "raid5f", 00:25:16.791 "superblock": true, 00:25:16.791 "num_base_bdevs": 4, 00:25:16.791 "num_base_bdevs_discovered": 2, 00:25:16.791 "num_base_bdevs_operational": 3, 00:25:16.791 "base_bdevs_list": [ 00:25:16.791 { 00:25:16.791 "name": null, 00:25:16.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:16.791 "is_configured": false, 00:25:16.791 "data_offset": 2048, 00:25:16.791 "data_size": 63488 00:25:16.792 }, 00:25:16.792 { 00:25:16.792 "name": "pt2", 00:25:16.792 "uuid": "381d9deb-361f-5979-a34e-fa2a8863b13d", 00:25:16.792 "is_configured": true, 00:25:16.792 "data_offset": 2048, 00:25:16.792 "data_size": 63488 00:25:16.792 }, 00:25:16.792 { 00:25:16.792 "name": "pt3", 00:25:16.792 "uuid": "f2af927c-9d89-596b-953b-b05a847d9d83", 00:25:16.792 "is_configured": true, 00:25:16.792 "data_offset": 2048, 00:25:16.792 "data_size": 63488 00:25:16.792 }, 00:25:16.792 { 00:25:16.792 "name": null, 00:25:16.792 "uuid": "14c43481-8ab0-5e95-9e55-ec3e61921e45", 00:25:16.792 "is_configured": false, 00:25:16.792 "data_offset": 2048, 00:25:16.792 "data_size": 63488 00:25:16.792 } 00:25:16.792 ] 00:25:16.792 }' 00:25:16.792 16:40:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:16.792 16:40:53 -- common/autotest_common.sh@10 -- # set +x 00:25:17.359 16:40:54 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:25:17.359 16:40:54 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:25:17.359 16:40:54 -- bdev/bdev_raid.sh@462 -- # i=3 00:25:17.359 16:40:54 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:17.618 [2024-07-11 16:40:54.268491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:17.618 [2024-07-11 16:40:54.268738] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.618 [2024-07-11 16:40:54.268898] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:25:17.618 [2024-07-11 16:40:54.269049] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.618 [2024-07-11 16:40:54.269692] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.618 [2024-07-11 16:40:54.269821] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:17.618 [2024-07-11 16:40:54.270035] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:17.618 [2024-07-11 16:40:54.270066] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:17.618 [2024-07-11 
16:40:54.270196] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:25:17.618 [2024-07-11 16:40:54.270209] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:17.618 [2024-07-11 16:40:54.270317] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:25:17.618 [2024-07-11 16:40:54.275467] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:25:17.618 [2024-07-11 16:40:54.275493] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:25:17.618 pt4 00:25:17.618 [2024-07-11 16:40:54.275765] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:17.618 16:40:54 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:17.618 16:40:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:17.618 16:40:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:17.618 16:40:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:17.618 16:40:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:17.618 16:40:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:17.618 16:40:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:17.618 16:40:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:17.618 16:40:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:17.618 16:40:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:17.618 16:40:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.618 16:40:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.876 16:40:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:17.876 "name": "raid_bdev1", 00:25:17.876 "uuid": "58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c", 00:25:17.876 "strip_size_kb": 64, 00:25:17.876 "state": "online", 00:25:17.876 "raid_level": "raid5f", 00:25:17.876 "superblock": true, 00:25:17.876 "num_base_bdevs": 4, 00:25:17.876 "num_base_bdevs_discovered": 3, 00:25:17.876 "num_base_bdevs_operational": 3, 00:25:17.876 "base_bdevs_list": [ 00:25:17.876 { 00:25:17.876 "name": null, 00:25:17.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:17.876 "is_configured": false, 00:25:17.876 "data_offset": 2048, 00:25:17.876 "data_size": 63488 00:25:17.876 }, 00:25:17.876 { 00:25:17.876 "name": "pt2", 00:25:17.876 "uuid": "381d9deb-361f-5979-a34e-fa2a8863b13d", 00:25:17.876 "is_configured": true, 00:25:17.876 "data_offset": 2048, 00:25:17.876 "data_size": 63488 00:25:17.876 }, 00:25:17.876 { 00:25:17.876 "name": "pt3", 00:25:17.876 "uuid": "f2af927c-9d89-596b-953b-b05a847d9d83", 00:25:17.876 "is_configured": true, 00:25:17.876 "data_offset": 2048, 00:25:17.876 "data_size": 63488 00:25:17.876 }, 00:25:17.876 { 00:25:17.876 "name": "pt4", 00:25:17.876 "uuid": "14c43481-8ab0-5e95-9e55-ec3e61921e45", 00:25:17.876 "is_configured": true, 00:25:17.876 "data_offset": 2048, 00:25:17.876 "data_size": 63488 00:25:17.876 } 00:25:17.876 ] 00:25:17.876 }' 00:25:17.876 16:40:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:17.876 16:40:54 -- common/autotest_common.sh@10 -- # set +x 00:25:18.444 16:40:55 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:25:18.444 16:40:55 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:18.703 
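The sequence above demonstrates superblock-driven re-assembly: no explicit bdev_raid_create is issued. Re-creating a member passthru bdev lets the examine path read its raid superblock and re-register raid_bdev1 in the configuring state, and once three of the recorded members are back (pt1 having been dropped from the superblock earlier), the raid5f array transitions to online in degraded form. A condensed sketch of that flow, assuming the same socket; the jq state probe at the end is an illustrative addition, not a line from the test:

    # re-create the surviving members; the examine hook reassembles raid_bdev1
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for n in 2 3 4; do
        $rpc bdev_passthru_create -b malloc$n -p pt$n \
            -u 00000000-0000-0000-0000-00000000000$n
    done
    $rpc bdev_raid_get_bdevs all | \
        jq -r '.[] | select(.name == "raid_bdev1") | .state'  # "online" at 3 of 4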
[2024-07-11 16:40:55.441921] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:18.703 [2024-07-11 16:40:55.441951] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:18.703 [2024-07-11 16:40:55.442022] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:18.703 [2024-07-11 16:40:55.442086] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:18.703 [2024-07-11 16:40:55.442096] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:25:18.703 16:40:55 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.703 16:40:55 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:25:18.962 16:40:55 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:25:18.962 16:40:55 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:25:18.962 16:40:55 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:19.222 [2024-07-11 16:40:55.809980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:19.222 [2024-07-11 16:40:55.810440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.222 [2024-07-11 16:40:55.810589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:25:19.222 [2024-07-11 16:40:55.810713] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.222 [2024-07-11 16:40:55.812813] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.222 [2024-07-11 16:40:55.813003] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:19.222 [2024-07-11 16:40:55.813209] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:25:19.222 [2024-07-11 16:40:55.813263] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:19.222 pt1 00:25:19.222 16:40:55 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:25:19.222 16:40:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:19.222 16:40:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:19.222 16:40:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:19.222 16:40:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:19.222 16:40:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:19.222 16:40:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:19.222 16:40:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:19.222 16:40:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:19.222 16:40:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:19.222 16:40:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.222 16:40:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.222 16:40:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:19.222 "name": "raid_bdev1", 00:25:19.222 "uuid": "58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c", 00:25:19.222 "strip_size_kb": 64, 00:25:19.222 "state": "configuring", 00:25:19.222 "raid_level": "raid5f", 00:25:19.222 "superblock": true, 
00:25:19.222 "num_base_bdevs": 4, 00:25:19.222 "num_base_bdevs_discovered": 1, 00:25:19.222 "num_base_bdevs_operational": 4, 00:25:19.222 "base_bdevs_list": [ 00:25:19.222 { 00:25:19.222 "name": "pt1", 00:25:19.222 "uuid": "88204a5b-c57c-5fe5-8f31-0f96b157ad3b", 00:25:19.222 "is_configured": true, 00:25:19.222 "data_offset": 2048, 00:25:19.222 "data_size": 63488 00:25:19.222 }, 00:25:19.222 { 00:25:19.222 "name": null, 00:25:19.222 "uuid": "381d9deb-361f-5979-a34e-fa2a8863b13d", 00:25:19.222 "is_configured": false, 00:25:19.222 "data_offset": 2048, 00:25:19.222 "data_size": 63488 00:25:19.222 }, 00:25:19.222 { 00:25:19.222 "name": null, 00:25:19.222 "uuid": "f2af927c-9d89-596b-953b-b05a847d9d83", 00:25:19.222 "is_configured": false, 00:25:19.222 "data_offset": 2048, 00:25:19.222 "data_size": 63488 00:25:19.222 }, 00:25:19.222 { 00:25:19.222 "name": null, 00:25:19.222 "uuid": "14c43481-8ab0-5e95-9e55-ec3e61921e45", 00:25:19.222 "is_configured": false, 00:25:19.222 "data_offset": 2048, 00:25:19.222 "data_size": 63488 00:25:19.222 } 00:25:19.222 ] 00:25:19.222 }' 00:25:19.222 16:40:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:19.222 16:40:56 -- common/autotest_common.sh@10 -- # set +x 00:25:20.159 16:40:56 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:25:20.159 16:40:56 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:20.159 16:40:56 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:20.159 16:40:56 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:20.159 16:40:56 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:20.159 16:40:56 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:20.418 16:40:56 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:20.418 16:40:56 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:20.418 16:40:56 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:20.418 16:40:57 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:25:20.418 16:40:57 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:25:20.418 16:40:57 -- bdev/bdev_raid.sh@489 -- # i=3 00:25:20.418 16:40:57 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:20.677 [2024-07-11 16:40:57.433532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:20.677 [2024-07-11 16:40:57.433640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.677 [2024-07-11 16:40:57.433709] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:25:20.677 [2024-07-11 16:40:57.433753] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:20.677 [2024-07-11 16:40:57.434586] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.677 [2024-07-11 16:40:57.434668] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:20.677 [2024-07-11 16:40:57.434834] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:25:20.677 [2024-07-11 16:40:57.434876] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:20.677 [2024-07-11 16:40:57.434890] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:20.677 [2024-07-11 16:40:57.434937] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:25:20.677 [2024-07-11 16:40:57.435053] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:20.677 pt4 00:25:20.677 16:40:57 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:25:20.677 16:40:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:20.677 16:40:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:25:20.677 16:40:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:20.677 16:40:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:20.677 16:40:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:20.677 16:40:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:20.677 16:40:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:20.677 16:40:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:20.677 16:40:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:20.677 16:40:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.677 16:40:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.936 16:40:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:20.936 "name": "raid_bdev1", 00:25:20.936 "uuid": "58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c", 00:25:20.936 "strip_size_kb": 64, 00:25:20.936 "state": "configuring", 00:25:20.936 "raid_level": "raid5f", 00:25:20.936 "superblock": true, 00:25:20.936 "num_base_bdevs": 4, 00:25:20.936 "num_base_bdevs_discovered": 1, 00:25:20.936 "num_base_bdevs_operational": 3, 00:25:20.936 "base_bdevs_list": [ 00:25:20.936 { 00:25:20.936 "name": null, 00:25:20.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.936 "is_configured": false, 00:25:20.936 "data_offset": 2048, 00:25:20.936 "data_size": 63488 00:25:20.936 }, 00:25:20.936 { 00:25:20.936 "name": null, 00:25:20.936 "uuid": "381d9deb-361f-5979-a34e-fa2a8863b13d", 00:25:20.936 "is_configured": false, 00:25:20.936 "data_offset": 2048, 00:25:20.936 "data_size": 63488 00:25:20.936 }, 00:25:20.936 { 00:25:20.936 "name": null, 00:25:20.936 "uuid": "f2af927c-9d89-596b-953b-b05a847d9d83", 00:25:20.936 "is_configured": false, 00:25:20.936 "data_offset": 2048, 00:25:20.936 "data_size": 63488 00:25:20.936 }, 00:25:20.936 { 00:25:20.936 "name": "pt4", 00:25:20.936 "uuid": "14c43481-8ab0-5e95-9e55-ec3e61921e45", 00:25:20.936 "is_configured": true, 00:25:20.936 "data_offset": 2048, 00:25:20.936 "data_size": 63488 00:25:20.936 } 00:25:20.936 ] 00:25:20.936 }' 00:25:20.936 16:40:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:20.936 16:40:57 -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 16:40:58 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:25:21.872 16:40:58 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:21.872 16:40:58 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:21.872 [2024-07-11 16:40:58.565643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:21.872 [2024-07-11 16:40:58.565782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.872 [2024-07-11 16:40:58.565823] 
vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:25:21.872 [2024-07-11 16:40:58.565848] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.872 [2024-07-11 16:40:58.566364] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.872 [2024-07-11 16:40:58.566452] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:21.872 [2024-07-11 16:40:58.566554] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:25:21.872 [2024-07-11 16:40:58.566595] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:21.872 pt2 00:25:21.872 16:40:58 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:21.872 16:40:58 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:21.872 16:40:58 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:22.131 [2024-07-11 16:40:58.769680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:22.131 [2024-07-11 16:40:58.769769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:22.131 [2024-07-11 16:40:58.769799] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:25:22.131 [2024-07-11 16:40:58.769828] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:22.131 [2024-07-11 16:40:58.770235] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:22.131 [2024-07-11 16:40:58.770290] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:22.131 [2024-07-11 16:40:58.770378] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:25:22.131 [2024-07-11 16:40:58.770432] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:22.131 [2024-07-11 16:40:58.770552] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:25:22.131 [2024-07-11 16:40:58.770571] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:22.131 [2024-07-11 16:40:58.770664] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:25:22.131 [2024-07-11 16:40:58.775980] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:25:22.131 [2024-07-11 16:40:58.776003] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:25:22.131 [2024-07-11 16:40:58.776267] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:22.131 pt3 00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.131 16:40:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.391 16:40:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:22.391 "name": "raid_bdev1", 00:25:22.391 "uuid": "58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c", 00:25:22.391 "strip_size_kb": 64, 00:25:22.391 "state": "online", 00:25:22.391 "raid_level": "raid5f", 00:25:22.391 "superblock": true, 00:25:22.391 "num_base_bdevs": 4, 00:25:22.391 "num_base_bdevs_discovered": 3, 00:25:22.391 "num_base_bdevs_operational": 3, 00:25:22.391 "base_bdevs_list": [ 00:25:22.391 { 00:25:22.391 "name": null, 00:25:22.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.391 "is_configured": false, 00:25:22.391 "data_offset": 2048, 00:25:22.391 "data_size": 63488 00:25:22.391 }, 00:25:22.391 { 00:25:22.391 "name": "pt2", 00:25:22.391 "uuid": "381d9deb-361f-5979-a34e-fa2a8863b13d", 00:25:22.391 "is_configured": true, 00:25:22.391 "data_offset": 2048, 00:25:22.391 "data_size": 63488 00:25:22.391 }, 00:25:22.391 { 00:25:22.391 "name": "pt3", 00:25:22.391 "uuid": "f2af927c-9d89-596b-953b-b05a847d9d83", 00:25:22.391 "is_configured": true, 00:25:22.391 "data_offset": 2048, 00:25:22.391 "data_size": 63488 00:25:22.391 }, 00:25:22.391 { 00:25:22.391 "name": "pt4", 00:25:22.391 "uuid": "14c43481-8ab0-5e95-9e55-ec3e61921e45", 00:25:22.391 "is_configured": true, 00:25:22.391 "data_offset": 2048, 00:25:22.391 "data_size": 63488 00:25:22.391 } 00:25:22.391 ] 00:25:22.391 }' 00:25:22.391 16:40:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:22.391 16:40:59 -- common/autotest_common.sh@10 -- # set +x 00:25:22.958 16:40:59 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:22.958 16:40:59 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:25:23.216 [2024-07-11 16:40:59.866877] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:23.216 16:40:59 -- bdev/bdev_raid.sh@506 -- # '[' 58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c '!=' 58339429-5a01-4bd7-9c5a-2aa9ce4f6e9c ']' 00:25:23.216 16:40:59 -- bdev/bdev_raid.sh@511 -- # killprocess 133924 00:25:23.216 16:40:59 -- common/autotest_common.sh@926 -- # '[' -z 133924 ']' 00:25:23.216 16:40:59 -- common/autotest_common.sh@930 -- # kill -0 133924 00:25:23.216 16:40:59 -- common/autotest_common.sh@931 -- # uname 00:25:23.216 16:40:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:23.216 16:40:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 133924 00:25:23.216 killing process with pid 133924 00:25:23.216 16:40:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:23.216 16:40:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:23.216 16:40:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 133924' 00:25:23.216 16:40:59 -- common/autotest_common.sh@945 -- # kill 133924 00:25:23.216 16:40:59 -- common/autotest_common.sh@950 -- # wait 133924 00:25:23.216 [2024-07-11 16:40:59.902498] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:23.216 [2024-07-11 16:40:59.902600] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:23.216 [2024-07-11 16:40:59.902724] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:23.216 [2024-07-11 16:40:59.902751] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:25:23.475 [2024-07-11 16:41:00.208125] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:24.409 ************************************ 00:25:24.409 END TEST raid5f_superblock_test 00:25:24.409 ************************************ 00:25:24.409 16:41:01 -- bdev/bdev_raid.sh@513 -- # return 0 00:25:24.409 00:25:24.409 real 0m21.258s 00:25:24.409 user 0m39.565s 00:25:24.409 sys 0m2.154s 00:25:24.409 16:41:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.409 16:41:01 -- common/autotest_common.sh@10 -- # set +x 00:25:24.409 16:41:01 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:25:24.409 16:41:01 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:25:24.409 16:41:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:24.409 16:41:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:24.409 16:41:01 -- common/autotest_common.sh@10 -- # set +x 00:25:24.409 ************************************ 00:25:24.410 START TEST raid5f_rebuild_test 00:25:24.410 ************************************ 00:25:24.410 16:41:01 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 false false 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 
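The rebuild test entered above is parameterized as raid_rebuild_test raid5f 4 false false, and the @521 trace expands num_base_bdevs into the base bdev list (the strip-size branch continues just below at @533). Condensed into a standalone sketch of that setup, with values from this run:

    raid_level=raid5f
    num_base_bdevs=4
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
    # raid5f != raid1, so a strip size is appended to the create arguments
    create_arg+=' -z 64'
    echo "${base_bdevs[@]}"    # -> BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
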
00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@544 -- # raid_pid=134628 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134628 /var/tmp/spdk-raid.sock 00:25:24.410 16:41:01 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:24.410 16:41:01 -- common/autotest_common.sh@819 -- # '[' -z 134628 ']' 00:25:24.410 16:41:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:24.410 16:41:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:24.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:24.410 16:41:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:24.410 16:41:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:24.410 16:41:01 -- common/autotest_common.sh@10 -- # set +x 00:25:24.668 [2024-07-11 16:41:01.224729] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:24.669 [2024-07-11 16:41:01.224908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134628 ] 00:25:24.669 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:24.669 Zero copy mechanism will not be used. 00:25:24.669 [2024-07-11 16:41:01.388334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.928 [2024-07-11 16:41:01.549247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.928 [2024-07-11 16:41:01.711831] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:25.495 16:41:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:25.495 16:41:02 -- common/autotest_common.sh@852 -- # return 0 00:25:25.495 16:41:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:25.496 16:41:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:25.496 16:41:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:25.754 BaseBdev1 00:25:25.754 16:41:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:25.754 16:41:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:25.754 16:41:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:26.013 BaseBdev2 00:25:26.013 16:41:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:26.013 16:41:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:26.013 16:41:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:26.272 BaseBdev3 00:25:26.272 16:41:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:26.272 16:41:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:26.272 16:41:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:25:26.272 BaseBdev4 00:25:26.272 16:41:03 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:26.530 spare_malloc 00:25:26.530 16:41:03 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:26.787 spare_delay 00:25:26.787 16:41:03 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:27.044 [2024-07-11 16:41:03.664963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:27.044 [2024-07-11 16:41:03.665087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.044 [2024-07-11 16:41:03.665119] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:27.044 [2024-07-11 16:41:03.665160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.044 [2024-07-11 16:41:03.667092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.044 [2024-07-11 16:41:03.667137] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:27.044 spare 00:25:27.044 16:41:03 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:27.044 [2024-07-11 16:41:03.845008] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:27.044 [2024-07-11 16:41:03.846579] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:27.044 [2024-07-11 16:41:03.846630] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:27.044 [2024-07-11 16:41:03.846665] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:27.044 [2024-07-11 16:41:03.846733] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:25:27.044 [2024-07-11 16:41:03.846744] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:27.044 [2024-07-11 16:41:03.846884] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:25:27.044 [2024-07-11 16:41:03.851992] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:25:27.044 [2024-07-11 16:41:03.852018] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:25:27.044 [2024-07-11 16:41:03.852201] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.303 16:41:03 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:27.303 16:41:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:27.303 16:41:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:27.303 16:41:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:27.303 16:41:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:27.303 16:41:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:27.303 16:41:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:27.303 16:41:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:27.303 
16:41:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:27.303 16:41:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:27.303 16:41:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.303 16:41:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.303 16:41:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:27.303 "name": "raid_bdev1", 00:25:27.303 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:27.303 "strip_size_kb": 64, 00:25:27.303 "state": "online", 00:25:27.303 "raid_level": "raid5f", 00:25:27.303 "superblock": false, 00:25:27.303 "num_base_bdevs": 4, 00:25:27.303 "num_base_bdevs_discovered": 4, 00:25:27.303 "num_base_bdevs_operational": 4, 00:25:27.303 "base_bdevs_list": [ 00:25:27.303 { 00:25:27.303 "name": "BaseBdev1", 00:25:27.303 "uuid": "8c64b51b-86a1-4896-9cd8-aea5668d25d5", 00:25:27.303 "is_configured": true, 00:25:27.303 "data_offset": 0, 00:25:27.303 "data_size": 65536 00:25:27.303 }, 00:25:27.303 { 00:25:27.303 "name": "BaseBdev2", 00:25:27.303 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:27.303 "is_configured": true, 00:25:27.303 "data_offset": 0, 00:25:27.303 "data_size": 65536 00:25:27.303 }, 00:25:27.303 { 00:25:27.303 "name": "BaseBdev3", 00:25:27.303 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:27.303 "is_configured": true, 00:25:27.303 "data_offset": 0, 00:25:27.303 "data_size": 65536 00:25:27.303 }, 00:25:27.303 { 00:25:27.303 "name": "BaseBdev4", 00:25:27.303 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:27.303 "is_configured": true, 00:25:27.303 "data_offset": 0, 00:25:27.303 "data_size": 65536 00:25:27.303 } 00:25:27.303 ] 00:25:27.303 }' 00:25:27.303 16:41:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:27.303 16:41:04 -- common/autotest_common.sh@10 -- # set +x 00:25:28.238 16:41:04 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:28.238 16:41:04 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:28.238 [2024-07-11 16:41:04.874830] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:28.238 16:41:04 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:25:28.238 16:41:04 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.238 16:41:04 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:28.509 16:41:05 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:25:28.509 16:41:05 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:28.509 16:41:05 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:28.509 16:41:05 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:28.509 16:41:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:28.509 16:41:05 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:28.509 16:41:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:28.509 16:41:05 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:28.509 16:41:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:28.509 16:41:05 -- bdev/nbd_common.sh@12 -- # local i 00:25:28.509 16:41:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:28.509 16:41:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:28.509 16:41:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:28.509 [2024-07-11 16:41:05.298811] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:28.777 /dev/nbd0 00:25:28.777 16:41:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:28.777 16:41:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:28.777 16:41:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:28.777 16:41:05 -- common/autotest_common.sh@857 -- # local i 00:25:28.777 16:41:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:28.777 16:41:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:28.777 16:41:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:28.777 16:41:05 -- common/autotest_common.sh@861 -- # break 00:25:28.777 16:41:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:28.777 16:41:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:28.777 16:41:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:28.777 1+0 records in 00:25:28.777 1+0 records out 00:25:28.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277888 s, 14.7 MB/s 00:25:28.777 16:41:05 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:28.777 16:41:05 -- common/autotest_common.sh@874 -- # size=4096 00:25:28.777 16:41:05 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:28.777 16:41:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:28.777 16:41:05 -- common/autotest_common.sh@877 -- # return 0 00:25:28.777 16:41:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:28.777 16:41:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:28.777 16:41:05 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:28.777 16:41:05 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:28.777 16:41:05 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:28.777 16:41:05 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:25:29.037 512+0 records in 00:25:29.037 512+0 records out 00:25:29.037 100663296 bytes (101 MB, 96 MiB) copied, 0.415061 s, 243 MB/s 00:25:29.037 16:41:05 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:29.037 16:41:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:29.037 16:41:05 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:29.037 16:41:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:29.037 16:41:05 -- bdev/nbd_common.sh@51 -- # local i 00:25:29.037 16:41:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:29.037 16:41:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:29.296 16:41:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:29.296 16:41:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:29.296 16:41:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:29.296 16:41:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:29.296 16:41:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:29.296 16:41:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:29.296 16:41:06 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:29.296 [2024-07-11 16:41:06.027285] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.554 16:41:06 -- bdev/nbd_common.sh@37 -- # (( i++ )) 
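The 196608-byte block size in the dd above is one full raid5f stripe of user data: with 4 base bdevs, one 64 KiB chunk per stripe is parity, leaving 3 × 64 KiB = 192 KiB, i.e. 384 blocks of 512 B — matching write_unit_size=384 and 'echo 192' in the trace. A sketch of that computation under the values from this run:

    num_base_bdevs=4
    strip_size_kb=64
    blocklen=512
    # user data per full stripe: (data chunks) * strip size
    write_unit_bytes=$(( (num_base_bdevs - 1) * strip_size_kb * 1024 ))  # 196608
    write_unit_blocks=$(( write_unit_bytes / blocklen ))                 # 384
    # 512 full-stripe writes = 100663296 bytes, as recorded above
    dd if=/dev/urandom of=/dev/nbd0 bs=$write_unit_bytes count=512 oflag=direct
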
00:25:29.554 16:41:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:29.554 16:41:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:29.554 16:41:06 -- bdev/nbd_common.sh@41 -- # break 00:25:29.554 16:41:06 -- bdev/nbd_common.sh@45 -- # return 0 00:25:29.554 16:41:06 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:29.812 [2024-07-11 16:41:06.375137] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:29.812 16:41:06 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:29.812 16:41:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:29.812 16:41:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:29.812 16:41:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:29.812 16:41:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:29.813 16:41:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:29.813 16:41:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:29.813 16:41:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:29.813 16:41:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:29.813 16:41:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:29.813 16:41:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.813 16:41:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.813 16:41:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:29.813 "name": "raid_bdev1", 00:25:29.813 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:29.813 "strip_size_kb": 64, 00:25:29.813 "state": "online", 00:25:29.813 "raid_level": "raid5f", 00:25:29.813 "superblock": false, 00:25:29.813 "num_base_bdevs": 4, 00:25:29.813 "num_base_bdevs_discovered": 3, 00:25:29.813 "num_base_bdevs_operational": 3, 00:25:29.813 "base_bdevs_list": [ 00:25:29.813 { 00:25:29.813 "name": null, 00:25:29.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:29.813 "is_configured": false, 00:25:29.813 "data_offset": 0, 00:25:29.813 "data_size": 65536 00:25:29.813 }, 00:25:29.813 { 00:25:29.813 "name": "BaseBdev2", 00:25:29.813 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:29.813 "is_configured": true, 00:25:29.813 "data_offset": 0, 00:25:29.813 "data_size": 65536 00:25:29.813 }, 00:25:29.813 { 00:25:29.813 "name": "BaseBdev3", 00:25:29.813 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:29.813 "is_configured": true, 00:25:29.813 "data_offset": 0, 00:25:29.813 "data_size": 65536 00:25:29.813 }, 00:25:29.813 { 00:25:29.813 "name": "BaseBdev4", 00:25:29.813 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:29.813 "is_configured": true, 00:25:29.813 "data_offset": 0, 00:25:29.813 "data_size": 65536 00:25:29.813 } 00:25:29.813 ] 00:25:29.813 }' 00:25:29.813 16:41:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:29.813 16:41:06 -- common/autotest_common.sh@10 -- # set +x 00:25:30.379 16:41:07 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:30.638 [2024-07-11 16:41:07.351318] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:30.638 [2024-07-11 16:41:07.351369] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:30.638 [2024-07-11 16:41:07.361487] 
bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cc70 00:25:30.638 [2024-07-11 16:41:07.368076] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:30.638 16:41:07 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:31.574 16:41:08 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:31.574 16:41:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:31.574 16:41:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:31.574 16:41:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:31.574 16:41:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:31.574 16:41:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.574 16:41:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.832 16:41:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:31.832 "name": "raid_bdev1", 00:25:31.832 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:31.832 "strip_size_kb": 64, 00:25:31.832 "state": "online", 00:25:31.832 "raid_level": "raid5f", 00:25:31.832 "superblock": false, 00:25:31.832 "num_base_bdevs": 4, 00:25:31.832 "num_base_bdevs_discovered": 4, 00:25:31.832 "num_base_bdevs_operational": 4, 00:25:31.832 "process": { 00:25:31.832 "type": "rebuild", 00:25:31.832 "target": "spare", 00:25:31.832 "progress": { 00:25:31.832 "blocks": 23040, 00:25:31.832 "percent": 11 00:25:31.832 } 00:25:31.832 }, 00:25:31.832 "base_bdevs_list": [ 00:25:31.832 { 00:25:31.832 "name": "spare", 00:25:31.832 "uuid": "30f9e1fe-71f2-52c8-b2d8-b7772720cebf", 00:25:31.832 "is_configured": true, 00:25:31.832 "data_offset": 0, 00:25:31.832 "data_size": 65536 00:25:31.832 }, 00:25:31.832 { 00:25:31.832 "name": "BaseBdev2", 00:25:31.832 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:31.832 "is_configured": true, 00:25:31.832 "data_offset": 0, 00:25:31.832 "data_size": 65536 00:25:31.832 }, 00:25:31.832 { 00:25:31.832 "name": "BaseBdev3", 00:25:31.832 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:31.832 "is_configured": true, 00:25:31.832 "data_offset": 0, 00:25:31.832 "data_size": 65536 00:25:31.832 }, 00:25:31.832 { 00:25:31.832 "name": "BaseBdev4", 00:25:31.832 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:31.832 "is_configured": true, 00:25:31.832 "data_offset": 0, 00:25:31.833 "data_size": 65536 00:25:31.833 } 00:25:31.833 ] 00:25:31.833 }' 00:25:31.833 16:41:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:32.091 16:41:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:32.091 16:41:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:32.091 16:41:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:32.091 16:41:08 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:32.350 [2024-07-11 16:41:08.965083] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:32.350 [2024-07-11 16:41:08.978170] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:32.350 [2024-07-11 16:41:08.978281] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:32.350 16:41:09 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:32.350 16:41:09 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=raid_bdev1 00:25:32.350 16:41:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:32.350 16:41:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:32.350 16:41:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:32.350 16:41:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:32.350 16:41:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:32.350 16:41:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:32.350 16:41:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:32.350 16:41:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:32.350 16:41:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:32.350 16:41:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:32.609 16:41:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:32.609 "name": "raid_bdev1", 00:25:32.609 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:32.609 "strip_size_kb": 64, 00:25:32.609 "state": "online", 00:25:32.609 "raid_level": "raid5f", 00:25:32.609 "superblock": false, 00:25:32.609 "num_base_bdevs": 4, 00:25:32.609 "num_base_bdevs_discovered": 3, 00:25:32.609 "num_base_bdevs_operational": 3, 00:25:32.609 "base_bdevs_list": [ 00:25:32.609 { 00:25:32.609 "name": null, 00:25:32.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.609 "is_configured": false, 00:25:32.609 "data_offset": 0, 00:25:32.609 "data_size": 65536 00:25:32.609 }, 00:25:32.609 { 00:25:32.609 "name": "BaseBdev2", 00:25:32.609 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:32.609 "is_configured": true, 00:25:32.609 "data_offset": 0, 00:25:32.609 "data_size": 65536 00:25:32.609 }, 00:25:32.609 { 00:25:32.609 "name": "BaseBdev3", 00:25:32.609 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:32.609 "is_configured": true, 00:25:32.609 "data_offset": 0, 00:25:32.609 "data_size": 65536 00:25:32.609 }, 00:25:32.609 { 00:25:32.609 "name": "BaseBdev4", 00:25:32.609 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:32.609 "is_configured": true, 00:25:32.609 "data_offset": 0, 00:25:32.609 "data_size": 65536 00:25:32.609 } 00:25:32.609 ] 00:25:32.609 }' 00:25:32.609 16:41:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:32.609 16:41:09 -- common/autotest_common.sh@10 -- # set +x 00:25:33.544 16:41:10 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:33.544 16:41:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:33.544 16:41:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:33.544 16:41:10 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:33.544 16:41:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:33.544 16:41:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.544 16:41:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.544 16:41:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:33.544 "name": "raid_bdev1", 00:25:33.544 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:33.544 "strip_size_kb": 64, 00:25:33.544 "state": "online", 00:25:33.544 "raid_level": "raid5f", 00:25:33.544 "superblock": false, 00:25:33.544 "num_base_bdevs": 4, 00:25:33.544 "num_base_bdevs_discovered": 3, 00:25:33.544 "num_base_bdevs_operational": 3, 00:25:33.544 "base_bdevs_list": [ 00:25:33.544 { 00:25:33.544 "name": null, 
00:25:33.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.544 "is_configured": false, 00:25:33.544 "data_offset": 0, 00:25:33.544 "data_size": 65536 00:25:33.544 }, 00:25:33.544 { 00:25:33.544 "name": "BaseBdev2", 00:25:33.544 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:33.544 "is_configured": true, 00:25:33.544 "data_offset": 0, 00:25:33.544 "data_size": 65536 00:25:33.544 }, 00:25:33.544 { 00:25:33.544 "name": "BaseBdev3", 00:25:33.544 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:33.544 "is_configured": true, 00:25:33.544 "data_offset": 0, 00:25:33.544 "data_size": 65536 00:25:33.544 }, 00:25:33.544 { 00:25:33.544 "name": "BaseBdev4", 00:25:33.544 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:33.544 "is_configured": true, 00:25:33.544 "data_offset": 0, 00:25:33.544 "data_size": 65536 00:25:33.544 } 00:25:33.544 ] 00:25:33.544 }' 00:25:33.544 16:41:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:33.544 16:41:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:33.544 16:41:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:33.802 16:41:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:33.802 16:41:10 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:34.061 [2024-07-11 16:41:10.664313] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:34.061 [2024-07-11 16:41:10.664373] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:34.061 [2024-07-11 16:41:10.677328] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ce10 00:25:34.061 [2024-07-11 16:41:10.686128] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:34.061 16:41:10 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:34.995 16:41:11 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:34.995 16:41:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:34.995 16:41:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:34.995 16:41:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:34.995 16:41:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:34.995 16:41:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.995 16:41:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.252 16:41:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:35.252 "name": "raid_bdev1", 00:25:35.252 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:35.252 "strip_size_kb": 64, 00:25:35.252 "state": "online", 00:25:35.252 "raid_level": "raid5f", 00:25:35.252 "superblock": false, 00:25:35.252 "num_base_bdevs": 4, 00:25:35.252 "num_base_bdevs_discovered": 4, 00:25:35.252 "num_base_bdevs_operational": 4, 00:25:35.252 "process": { 00:25:35.252 "type": "rebuild", 00:25:35.252 "target": "spare", 00:25:35.252 "progress": { 00:25:35.252 "blocks": 23040, 00:25:35.252 "percent": 11 00:25:35.252 } 00:25:35.252 }, 00:25:35.252 "base_bdevs_list": [ 00:25:35.252 { 00:25:35.252 "name": "spare", 00:25:35.252 "uuid": "30f9e1fe-71f2-52c8-b2d8-b7772720cebf", 00:25:35.252 "is_configured": true, 00:25:35.252 "data_offset": 0, 00:25:35.253 "data_size": 65536 00:25:35.253 }, 00:25:35.253 { 00:25:35.253 "name": "BaseBdev2", 00:25:35.253 "uuid": 
"165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:35.253 "is_configured": true, 00:25:35.253 "data_offset": 0, 00:25:35.253 "data_size": 65536 00:25:35.253 }, 00:25:35.253 { 00:25:35.253 "name": "BaseBdev3", 00:25:35.253 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:35.253 "is_configured": true, 00:25:35.253 "data_offset": 0, 00:25:35.253 "data_size": 65536 00:25:35.253 }, 00:25:35.253 { 00:25:35.253 "name": "BaseBdev4", 00:25:35.253 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:35.253 "is_configured": true, 00:25:35.253 "data_offset": 0, 00:25:35.253 "data_size": 65536 00:25:35.253 } 00:25:35.253 ] 00:25:35.253 }' 00:25:35.253 16:41:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:35.253 16:41:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:35.253 16:41:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:35.511 16:41:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:35.511 16:41:12 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:25:35.511 16:41:12 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:35.511 16:41:12 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:35.511 16:41:12 -- bdev/bdev_raid.sh@657 -- # local timeout=688 00:25:35.511 16:41:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:35.511 16:41:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:35.511 16:41:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:35.511 16:41:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:35.511 16:41:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:35.511 16:41:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:35.511 16:41:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.511 16:41:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.769 16:41:12 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:35.769 "name": "raid_bdev1", 00:25:35.769 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:35.769 "strip_size_kb": 64, 00:25:35.769 "state": "online", 00:25:35.769 "raid_level": "raid5f", 00:25:35.769 "superblock": false, 00:25:35.769 "num_base_bdevs": 4, 00:25:35.769 "num_base_bdevs_discovered": 4, 00:25:35.769 "num_base_bdevs_operational": 4, 00:25:35.769 "process": { 00:25:35.769 "type": "rebuild", 00:25:35.769 "target": "spare", 00:25:35.769 "progress": { 00:25:35.769 "blocks": 30720, 00:25:35.769 "percent": 15 00:25:35.769 } 00:25:35.769 }, 00:25:35.769 "base_bdevs_list": [ 00:25:35.769 { 00:25:35.769 "name": "spare", 00:25:35.769 "uuid": "30f9e1fe-71f2-52c8-b2d8-b7772720cebf", 00:25:35.769 "is_configured": true, 00:25:35.769 "data_offset": 0, 00:25:35.769 "data_size": 65536 00:25:35.769 }, 00:25:35.769 { 00:25:35.769 "name": "BaseBdev2", 00:25:35.769 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:35.769 "is_configured": true, 00:25:35.769 "data_offset": 0, 00:25:35.769 "data_size": 65536 00:25:35.769 }, 00:25:35.769 { 00:25:35.769 "name": "BaseBdev3", 00:25:35.769 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:35.769 "is_configured": true, 00:25:35.769 "data_offset": 0, 00:25:35.769 "data_size": 65536 00:25:35.769 }, 00:25:35.769 { 00:25:35.769 "name": "BaseBdev4", 00:25:35.769 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:35.769 "is_configured": true, 00:25:35.769 "data_offset": 0, 00:25:35.769 "data_size": 65536 
00:25:35.769 } 00:25:35.769 ] 00:25:35.769 }' 00:25:35.769 16:41:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:35.769 16:41:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:35.769 16:41:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:35.769 16:41:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:35.769 16:41:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:36.701 16:41:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:36.701 16:41:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:36.701 16:41:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:36.701 16:41:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:36.701 16:41:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:36.701 16:41:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:36.701 16:41:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.701 16:41:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.958 16:41:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:36.958 "name": "raid_bdev1", 00:25:36.958 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:36.958 "strip_size_kb": 64, 00:25:36.958 "state": "online", 00:25:36.958 "raid_level": "raid5f", 00:25:36.958 "superblock": false, 00:25:36.958 "num_base_bdevs": 4, 00:25:36.958 "num_base_bdevs_discovered": 4, 00:25:36.958 "num_base_bdevs_operational": 4, 00:25:36.958 "process": { 00:25:36.958 "type": "rebuild", 00:25:36.958 "target": "spare", 00:25:36.958 "progress": { 00:25:36.958 "blocks": 57600, 00:25:36.958 "percent": 29 00:25:36.958 } 00:25:36.958 }, 00:25:36.958 "base_bdevs_list": [ 00:25:36.958 { 00:25:36.958 "name": "spare", 00:25:36.958 "uuid": "30f9e1fe-71f2-52c8-b2d8-b7772720cebf", 00:25:36.958 "is_configured": true, 00:25:36.958 "data_offset": 0, 00:25:36.958 "data_size": 65536 00:25:36.958 }, 00:25:36.958 { 00:25:36.958 "name": "BaseBdev2", 00:25:36.958 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:36.958 "is_configured": true, 00:25:36.958 "data_offset": 0, 00:25:36.958 "data_size": 65536 00:25:36.958 }, 00:25:36.958 { 00:25:36.958 "name": "BaseBdev3", 00:25:36.958 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:36.958 "is_configured": true, 00:25:36.958 "data_offset": 0, 00:25:36.958 "data_size": 65536 00:25:36.958 }, 00:25:36.958 { 00:25:36.958 "name": "BaseBdev4", 00:25:36.958 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:36.958 "is_configured": true, 00:25:36.958 "data_offset": 0, 00:25:36.959 "data_size": 65536 00:25:36.959 } 00:25:36.959 ] 00:25:36.959 }' 00:25:36.959 16:41:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:37.217 16:41:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:37.217 16:41:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:37.217 16:41:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:37.217 16:41:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:38.150 16:41:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:38.150 16:41:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:38.150 16:41:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:38.150 16:41:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:38.150 16:41:14 -- bdev/bdev_raid.sh@185 -- # local 
target=spare 00:25:38.150 16:41:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:38.150 16:41:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.150 16:41:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.408 16:41:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:38.408 "name": "raid_bdev1", 00:25:38.408 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:38.408 "strip_size_kb": 64, 00:25:38.408 "state": "online", 00:25:38.408 "raid_level": "raid5f", 00:25:38.408 "superblock": false, 00:25:38.408 "num_base_bdevs": 4, 00:25:38.408 "num_base_bdevs_discovered": 4, 00:25:38.408 "num_base_bdevs_operational": 4, 00:25:38.408 "process": { 00:25:38.408 "type": "rebuild", 00:25:38.408 "target": "spare", 00:25:38.408 "progress": { 00:25:38.408 "blocks": 82560, 00:25:38.408 "percent": 41 00:25:38.408 } 00:25:38.408 }, 00:25:38.408 "base_bdevs_list": [ 00:25:38.408 { 00:25:38.408 "name": "spare", 00:25:38.408 "uuid": "30f9e1fe-71f2-52c8-b2d8-b7772720cebf", 00:25:38.408 "is_configured": true, 00:25:38.408 "data_offset": 0, 00:25:38.408 "data_size": 65536 00:25:38.408 }, 00:25:38.408 { 00:25:38.408 "name": "BaseBdev2", 00:25:38.408 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:38.408 "is_configured": true, 00:25:38.408 "data_offset": 0, 00:25:38.408 "data_size": 65536 00:25:38.409 }, 00:25:38.409 { 00:25:38.409 "name": "BaseBdev3", 00:25:38.409 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:38.409 "is_configured": true, 00:25:38.409 "data_offset": 0, 00:25:38.409 "data_size": 65536 00:25:38.409 }, 00:25:38.409 { 00:25:38.409 "name": "BaseBdev4", 00:25:38.409 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:38.409 "is_configured": true, 00:25:38.409 "data_offset": 0, 00:25:38.409 "data_size": 65536 00:25:38.409 } 00:25:38.409 ] 00:25:38.409 }' 00:25:38.409 16:41:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:38.409 16:41:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:38.409 16:41:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:38.666 16:41:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:38.666 16:41:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:39.600 16:41:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:39.600 16:41:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.600 16:41:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:39.600 16:41:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:39.600 16:41:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:39.600 16:41:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:39.600 16:41:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.600 16:41:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.858 16:41:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:39.858 "name": "raid_bdev1", 00:25:39.858 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:39.858 "strip_size_kb": 64, 00:25:39.858 "state": "online", 00:25:39.858 "raid_level": "raid5f", 00:25:39.858 "superblock": false, 00:25:39.858 "num_base_bdevs": 4, 00:25:39.858 "num_base_bdevs_discovered": 4, 00:25:39.858 "num_base_bdevs_operational": 4, 00:25:39.858 "process": { 00:25:39.858 "type": "rebuild", 00:25:39.858 "target": 
"spare", 00:25:39.858 "progress": { 00:25:39.858 "blocks": 109440, 00:25:39.858 "percent": 55 00:25:39.858 } 00:25:39.858 }, 00:25:39.858 "base_bdevs_list": [ 00:25:39.858 { 00:25:39.858 "name": "spare", 00:25:39.858 "uuid": "30f9e1fe-71f2-52c8-b2d8-b7772720cebf", 00:25:39.858 "is_configured": true, 00:25:39.858 "data_offset": 0, 00:25:39.858 "data_size": 65536 00:25:39.858 }, 00:25:39.858 { 00:25:39.858 "name": "BaseBdev2", 00:25:39.858 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:39.858 "is_configured": true, 00:25:39.858 "data_offset": 0, 00:25:39.859 "data_size": 65536 00:25:39.859 }, 00:25:39.859 { 00:25:39.859 "name": "BaseBdev3", 00:25:39.859 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:39.859 "is_configured": true, 00:25:39.859 "data_offset": 0, 00:25:39.859 "data_size": 65536 00:25:39.859 }, 00:25:39.859 { 00:25:39.859 "name": "BaseBdev4", 00:25:39.859 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:39.859 "is_configured": true, 00:25:39.859 "data_offset": 0, 00:25:39.859 "data_size": 65536 00:25:39.859 } 00:25:39.859 ] 00:25:39.859 }' 00:25:39.859 16:41:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:39.859 16:41:16 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:39.859 16:41:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:39.859 16:41:16 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:39.859 16:41:16 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:40.795 16:41:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:40.795 16:41:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:40.795 16:41:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:40.795 16:41:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:40.795 16:41:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:40.795 16:41:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:40.795 16:41:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.796 16:41:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:41.054 16:41:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:41.054 "name": "raid_bdev1", 00:25:41.054 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:41.054 "strip_size_kb": 64, 00:25:41.054 "state": "online", 00:25:41.054 "raid_level": "raid5f", 00:25:41.055 "superblock": false, 00:25:41.055 "num_base_bdevs": 4, 00:25:41.055 "num_base_bdevs_discovered": 4, 00:25:41.055 "num_base_bdevs_operational": 4, 00:25:41.055 "process": { 00:25:41.055 "type": "rebuild", 00:25:41.055 "target": "spare", 00:25:41.055 "progress": { 00:25:41.055 "blocks": 134400, 00:25:41.055 "percent": 68 00:25:41.055 } 00:25:41.055 }, 00:25:41.055 "base_bdevs_list": [ 00:25:41.055 { 00:25:41.055 "name": "spare", 00:25:41.055 "uuid": "30f9e1fe-71f2-52c8-b2d8-b7772720cebf", 00:25:41.055 "is_configured": true, 00:25:41.055 "data_offset": 0, 00:25:41.055 "data_size": 65536 00:25:41.055 }, 00:25:41.055 { 00:25:41.055 "name": "BaseBdev2", 00:25:41.055 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:41.055 "is_configured": true, 00:25:41.055 "data_offset": 0, 00:25:41.055 "data_size": 65536 00:25:41.055 }, 00:25:41.055 { 00:25:41.055 "name": "BaseBdev3", 00:25:41.055 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:41.055 "is_configured": true, 00:25:41.055 "data_offset": 0, 00:25:41.055 "data_size": 65536 00:25:41.055 }, 
00:25:41.055 { 00:25:41.055 "name": "BaseBdev4", 00:25:41.055 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:41.055 "is_configured": true, 00:25:41.055 "data_offset": 0, 00:25:41.055 "data_size": 65536 00:25:41.055 } 00:25:41.055 ] 00:25:41.055 }' 00:25:41.055 16:41:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:41.314 16:41:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:41.314 16:41:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:41.314 16:41:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:41.314 16:41:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:42.250 16:41:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:42.250 16:41:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:42.250 16:41:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:42.250 16:41:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:42.250 16:41:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:42.250 16:41:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:42.250 16:41:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.250 16:41:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.508 16:41:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:42.508 "name": "raid_bdev1", 00:25:42.508 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:42.508 "strip_size_kb": 64, 00:25:42.508 "state": "online", 00:25:42.508 "raid_level": "raid5f", 00:25:42.508 "superblock": false, 00:25:42.508 "num_base_bdevs": 4, 00:25:42.508 "num_base_bdevs_discovered": 4, 00:25:42.508 "num_base_bdevs_operational": 4, 00:25:42.508 "process": { 00:25:42.508 "type": "rebuild", 00:25:42.508 "target": "spare", 00:25:42.508 "progress": { 00:25:42.508 "blocks": 161280, 00:25:42.508 "percent": 82 00:25:42.508 } 00:25:42.508 }, 00:25:42.508 "base_bdevs_list": [ 00:25:42.508 { 00:25:42.508 "name": "spare", 00:25:42.508 "uuid": "30f9e1fe-71f2-52c8-b2d8-b7772720cebf", 00:25:42.508 "is_configured": true, 00:25:42.508 "data_offset": 0, 00:25:42.508 "data_size": 65536 00:25:42.508 }, 00:25:42.509 { 00:25:42.509 "name": "BaseBdev2", 00:25:42.509 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:42.509 "is_configured": true, 00:25:42.509 "data_offset": 0, 00:25:42.509 "data_size": 65536 00:25:42.509 }, 00:25:42.509 { 00:25:42.509 "name": "BaseBdev3", 00:25:42.509 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:42.509 "is_configured": true, 00:25:42.509 "data_offset": 0, 00:25:42.509 "data_size": 65536 00:25:42.509 }, 00:25:42.509 { 00:25:42.509 "name": "BaseBdev4", 00:25:42.509 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:42.509 "is_configured": true, 00:25:42.509 "data_offset": 0, 00:25:42.509 "data_size": 65536 00:25:42.509 } 00:25:42.509 ] 00:25:42.509 }' 00:25:42.509 16:41:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:42.509 16:41:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:42.509 16:41:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:42.509 16:41:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:42.509 16:41:19 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:43.885 16:41:20 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:43.885 16:41:20 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:43.885 
16:41:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:43.885 16:41:20 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:43.885 16:41:20 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:43.885 16:41:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:43.885 16:41:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.885 16:41:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.885 16:41:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:43.885 "name": "raid_bdev1", 00:25:43.885 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:43.885 "strip_size_kb": 64, 00:25:43.885 "state": "online", 00:25:43.885 "raid_level": "raid5f", 00:25:43.885 "superblock": false, 00:25:43.885 "num_base_bdevs": 4, 00:25:43.885 "num_base_bdevs_discovered": 4, 00:25:43.885 "num_base_bdevs_operational": 4, 00:25:43.885 "process": { 00:25:43.885 "type": "rebuild", 00:25:43.885 "target": "spare", 00:25:43.885 "progress": { 00:25:43.885 "blocks": 186240, 00:25:43.885 "percent": 94 00:25:43.885 } 00:25:43.885 }, 00:25:43.885 "base_bdevs_list": [ 00:25:43.885 { 00:25:43.885 "name": "spare", 00:25:43.885 "uuid": "30f9e1fe-71f2-52c8-b2d8-b7772720cebf", 00:25:43.885 "is_configured": true, 00:25:43.885 "data_offset": 0, 00:25:43.885 "data_size": 65536 00:25:43.885 }, 00:25:43.885 { 00:25:43.885 "name": "BaseBdev2", 00:25:43.885 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:43.885 "is_configured": true, 00:25:43.885 "data_offset": 0, 00:25:43.885 "data_size": 65536 00:25:43.885 }, 00:25:43.885 { 00:25:43.885 "name": "BaseBdev3", 00:25:43.885 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:43.885 "is_configured": true, 00:25:43.885 "data_offset": 0, 00:25:43.885 "data_size": 65536 00:25:43.885 }, 00:25:43.885 { 00:25:43.885 "name": "BaseBdev4", 00:25:43.885 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:43.885 "is_configured": true, 00:25:43.885 "data_offset": 0, 00:25:43.885 "data_size": 65536 00:25:43.885 } 00:25:43.885 ] 00:25:43.885 }' 00:25:43.885 16:41:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:43.885 16:41:20 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:43.885 16:41:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:43.885 16:41:20 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:43.885 16:41:20 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:44.453 [2024-07-11 16:41:21.058469] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:44.453 [2024-07-11 16:41:21.058558] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:44.453 [2024-07-11 16:41:21.058663] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:45.020 16:41:21 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:45.020 16:41:21 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:45.020 16:41:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:45.020 16:41:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:45.020 16:41:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:45.020 16:41:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:45.020 16:41:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.020 
16:41:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.279 16:41:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:45.279 "name": "raid_bdev1", 00:25:45.279 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:45.279 "strip_size_kb": 64, 00:25:45.279 "state": "online", 00:25:45.279 "raid_level": "raid5f", 00:25:45.279 "superblock": false, 00:25:45.279 "num_base_bdevs": 4, 00:25:45.279 "num_base_bdevs_discovered": 4, 00:25:45.279 "num_base_bdevs_operational": 4, 00:25:45.279 "base_bdevs_list": [ 00:25:45.279 { 00:25:45.279 "name": "spare", 00:25:45.279 "uuid": "30f9e1fe-71f2-52c8-b2d8-b7772720cebf", 00:25:45.279 "is_configured": true, 00:25:45.279 "data_offset": 0, 00:25:45.279 "data_size": 65536 00:25:45.279 }, 00:25:45.279 { 00:25:45.279 "name": "BaseBdev2", 00:25:45.279 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:45.279 "is_configured": true, 00:25:45.279 "data_offset": 0, 00:25:45.279 "data_size": 65536 00:25:45.279 }, 00:25:45.279 { 00:25:45.279 "name": "BaseBdev3", 00:25:45.279 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:45.279 "is_configured": true, 00:25:45.279 "data_offset": 0, 00:25:45.279 "data_size": 65536 00:25:45.279 }, 00:25:45.279 { 00:25:45.279 "name": "BaseBdev4", 00:25:45.279 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:45.279 "is_configured": true, 00:25:45.279 "data_offset": 0, 00:25:45.279 "data_size": 65536 00:25:45.279 } 00:25:45.279 ] 00:25:45.279 }' 00:25:45.279 16:41:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:45.279 16:41:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:45.279 16:41:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:45.279 16:41:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:45.279 16:41:22 -- bdev/bdev_raid.sh@660 -- # break 00:25:45.279 16:41:22 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:45.279 16:41:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:45.279 16:41:22 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:45.279 16:41:22 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:45.279 16:41:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:45.279 16:41:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.279 16:41:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.583 16:41:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:45.583 "name": "raid_bdev1", 00:25:45.583 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:45.583 "strip_size_kb": 64, 00:25:45.583 "state": "online", 00:25:45.583 "raid_level": "raid5f", 00:25:45.583 "superblock": false, 00:25:45.583 "num_base_bdevs": 4, 00:25:45.583 "num_base_bdevs_discovered": 4, 00:25:45.583 "num_base_bdevs_operational": 4, 00:25:45.583 "base_bdevs_list": [ 00:25:45.583 { 00:25:45.583 "name": "spare", 00:25:45.583 "uuid": "30f9e1fe-71f2-52c8-b2d8-b7772720cebf", 00:25:45.583 "is_configured": true, 00:25:45.583 "data_offset": 0, 00:25:45.583 "data_size": 65536 00:25:45.583 }, 00:25:45.583 { 00:25:45.583 "name": "BaseBdev2", 00:25:45.583 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:45.583 "is_configured": true, 00:25:45.583 "data_offset": 0, 00:25:45.583 "data_size": 65536 00:25:45.583 }, 00:25:45.583 { 00:25:45.583 "name": "BaseBdev3", 00:25:45.583 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:45.583 "is_configured": true, 
00:25:45.583 "data_offset": 0, 00:25:45.583 "data_size": 65536 00:25:45.583 }, 00:25:45.583 { 00:25:45.583 "name": "BaseBdev4", 00:25:45.583 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:45.583 "is_configured": true, 00:25:45.583 "data_offset": 0, 00:25:45.583 "data_size": 65536 00:25:45.583 } 00:25:45.583 ] 00:25:45.583 }' 00:25:45.583 16:41:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:45.891 "name": "raid_bdev1", 00:25:45.891 "uuid": "bcb653b5-87ef-4976-8275-784986bc9b1e", 00:25:45.891 "strip_size_kb": 64, 00:25:45.891 "state": "online", 00:25:45.891 "raid_level": "raid5f", 00:25:45.891 "superblock": false, 00:25:45.891 "num_base_bdevs": 4, 00:25:45.891 "num_base_bdevs_discovered": 4, 00:25:45.891 "num_base_bdevs_operational": 4, 00:25:45.891 "base_bdevs_list": [ 00:25:45.891 { 00:25:45.891 "name": "spare", 00:25:45.891 "uuid": "30f9e1fe-71f2-52c8-b2d8-b7772720cebf", 00:25:45.891 "is_configured": true, 00:25:45.891 "data_offset": 0, 00:25:45.891 "data_size": 65536 00:25:45.891 }, 00:25:45.891 { 00:25:45.891 "name": "BaseBdev2", 00:25:45.891 "uuid": "165808f6-f8b2-440e-a6af-9f5710aa1fa4", 00:25:45.891 "is_configured": true, 00:25:45.891 "data_offset": 0, 00:25:45.891 "data_size": 65536 00:25:45.891 }, 00:25:45.891 { 00:25:45.891 "name": "BaseBdev3", 00:25:45.891 "uuid": "b54e1ec2-8695-4c62-bc76-5c1a31548ffe", 00:25:45.891 "is_configured": true, 00:25:45.891 "data_offset": 0, 00:25:45.891 "data_size": 65536 00:25:45.891 }, 00:25:45.891 { 00:25:45.891 "name": "BaseBdev4", 00:25:45.891 "uuid": "ba4a298d-bb94-45c9-b6fb-28655212ff4c", 00:25:45.891 "is_configured": true, 00:25:45.891 "data_offset": 0, 00:25:45.891 "data_size": 65536 00:25:45.891 } 00:25:45.891 ] 00:25:45.891 }' 00:25:45.891 16:41:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:45.891 16:41:22 -- common/autotest_common.sh@10 -- # set +x 00:25:46.842 16:41:23 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:46.842 [2024-07-11 16:41:23.630088] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:46.842 [2024-07-11 16:41:23.630130] 
bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:46.842 [2024-07-11 16:41:23.630238] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:46.842 [2024-07-11 16:41:23.630326] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:46.842 [2024-07-11 16:41:23.630340] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:25:46.842 16:41:23 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.842 16:41:23 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:47.099 16:41:23 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:47.099 16:41:23 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:47.099 16:41:23 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:47.099 16:41:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:47.099 16:41:23 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:47.099 16:41:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:47.099 16:41:23 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:47.099 16:41:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:47.099 16:41:23 -- bdev/nbd_common.sh@12 -- # local i 00:25:47.099 16:41:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:47.099 16:41:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:47.099 16:41:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:47.358 /dev/nbd0 00:25:47.358 16:41:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:47.358 16:41:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:47.358 16:41:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:47.358 16:41:24 -- common/autotest_common.sh@857 -- # local i 00:25:47.358 16:41:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:47.358 16:41:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:47.358 16:41:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:47.358 16:41:24 -- common/autotest_common.sh@861 -- # break 00:25:47.358 16:41:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:47.358 16:41:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:47.358 16:41:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:47.358 1+0 records in 00:25:47.358 1+0 records out 00:25:47.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227893 s, 18.0 MB/s 00:25:47.358 16:41:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:47.358 16:41:24 -- common/autotest_common.sh@874 -- # size=4096 00:25:47.358 16:41:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:47.358 16:41:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:47.358 16:41:24 -- common/autotest_common.sh@877 -- # return 0 00:25:47.358 16:41:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:47.358 16:41:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:47.358 16:41:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:47.616 /dev/nbd1 00:25:47.616 16:41:24 -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:25:47.616 16:41:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:47.616 16:41:24 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:25:47.616 16:41:24 -- common/autotest_common.sh@857 -- # local i 00:25:47.616 16:41:24 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:47.616 16:41:24 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:47.616 16:41:24 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:25:47.616 16:41:24 -- common/autotest_common.sh@861 -- # break 00:25:47.616 16:41:24 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:47.616 16:41:24 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:47.616 16:41:24 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:47.616 1+0 records in 00:25:47.616 1+0 records out 00:25:47.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407402 s, 10.1 MB/s 00:25:47.616 16:41:24 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:47.616 16:41:24 -- common/autotest_common.sh@874 -- # size=4096 00:25:47.616 16:41:24 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:47.616 16:41:24 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:47.616 16:41:24 -- common/autotest_common.sh@877 -- # return 0 00:25:47.616 16:41:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:47.616 16:41:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:47.616 16:41:24 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:47.875 16:41:24 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:47.875 16:41:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:47.875 16:41:24 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:47.875 16:41:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:47.875 16:41:24 -- bdev/nbd_common.sh@51 -- # local i 00:25:47.875 16:41:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:47.875 16:41:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@41 -- # break 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@45 -- # return 0 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:48.134 16:41:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:48.393 16:41:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:48.393 16:41:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:48.393 16:41:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:48.393 16:41:25 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:48.393 16:41:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:48.393 16:41:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:48.393 16:41:25 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:48.651 16:41:25 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:48.651 16:41:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:48.651 16:41:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:48.651 16:41:25 -- bdev/nbd_common.sh@41 -- # break 00:25:48.651 16:41:25 -- bdev/nbd_common.sh@45 -- # return 0 00:25:48.651 16:41:25 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:48.651 16:41:25 -- bdev/bdev_raid.sh@709 -- # killprocess 134628 00:25:48.651 16:41:25 -- common/autotest_common.sh@926 -- # '[' -z 134628 ']' 00:25:48.651 16:41:25 -- common/autotest_common.sh@930 -- # kill -0 134628 00:25:48.651 16:41:25 -- common/autotest_common.sh@931 -- # uname 00:25:48.651 16:41:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:48.651 16:41:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 134628 00:25:48.651 16:41:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:48.651 16:41:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:48.651 16:41:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 134628' 00:25:48.651 killing process with pid 134628 00:25:48.651 16:41:25 -- common/autotest_common.sh@945 -- # kill 134628 00:25:48.651 Received shutdown signal, test time was about 60.000000 seconds 00:25:48.651 00:25:48.651 Latency(us) 00:25:48.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.651 =================================================================================================================== 00:25:48.651 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:48.651 [2024-07-11 16:41:25.246056] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:48.651 16:41:25 -- common/autotest_common.sh@950 -- # wait 134628 00:25:48.910 [2024-07-11 16:41:25.562514] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:49.844 00:25:49.844 real 0m25.401s 00:25:49.844 user 0m37.489s 00:25:49.844 sys 0m2.476s 00:25:49.844 16:41:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:49.844 16:41:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.844 ************************************ 00:25:49.844 END TEST raid5f_rebuild_test 00:25:49.844 ************************************ 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:25:49.844 16:41:26 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:25:49.844 16:41:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:49.844 16:41:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.844 ************************************ 00:25:49.844 START TEST raid5f_rebuild_test_sb 00:25:49.844 ************************************ 00:25:49.844 16:41:26 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid5f 4 true false 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:49.844 16:41:26 -- 
bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:49.844 16:41:26 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@544 -- # raid_pid=135285 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@545 -- # waitforlisten 135285 /var/tmp/spdk-raid.sock 00:25:49.845 16:41:26 -- common/autotest_common.sh@819 -- # '[' -z 135285 ']' 00:25:49.845 16:41:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:49.845 16:41:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:49.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:49.845 16:41:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:49.845 16:41:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:49.845 16:41:26 -- common/autotest_common.sh@10 -- # set +x 00:25:49.845 16:41:26 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:50.102 [2024-07-11 16:41:26.684557] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:50.102 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:50.102 Zero copy mechanism will not be used. 
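The rebuild verification that both this raid5f_rebuild_test_sb run and the raid5f_rebuild_test run above keep tracing follows one polling pattern: fetch the raid bdev's JSON over the RPC socket, check its .process fields, sleep one second, and repeat until the rebuild completes. A condensed sketch, reconstructed from the traced commands (the rpc.py and jq invocations are exactly the ones visible in the trace; the loop structure around bdev_raid.sh@657-@662 is inferred from the xtrace, not the verbatim source):

while (( SECONDS < timeout )); do    # timeout comes from "local timeout=714" traced at @657
    # Pull the JSON blob for raid_bdev1, exactly as the traced rpc.py | jq pipeline does
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # While the rebuild runs, .process reports type "rebuild" and target "spare";
    # once it finishes, both queries fall back to "none" and the loop breaks (the @660 break)
    [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
    [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]] || break
    sleep 1
done

One genuine script bug surfaces later in this run: "bdev_raid.sh: line 617: [: =: unary operator expected" appears because an empty, unquoted variable expands to nothing inside a single-bracket test, leaving '[' = false ']'. The trace does not reveal which variable was empty, so as a general fix, quoting the expansion or switching to double brackets, e.g. [ "$flag" = false ] or [[ $flag = false ]], keeps the comparison well-formed even when the value is empty.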
00:25:50.102 [2024-07-11 16:41:26.684791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135285 ] 00:25:50.102 [2024-07-11 16:41:26.857943] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.360 [2024-07-11 16:41:27.126242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.618 [2024-07-11 16:41:27.297138] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:50.877 16:41:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:50.877 16:41:27 -- common/autotest_common.sh@852 -- # return 0 00:25:50.877 16:41:27 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:50.877 16:41:27 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:50.877 16:41:27 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:51.135 BaseBdev1_malloc 00:25:51.135 16:41:27 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:51.394 [2024-07-11 16:41:28.041753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:51.394 [2024-07-11 16:41:28.041845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.394 [2024-07-11 16:41:28.041878] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:25:51.394 [2024-07-11 16:41:28.041920] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.394 [2024-07-11 16:41:28.044145] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.394 [2024-07-11 16:41:28.044185] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:51.394 BaseBdev1 00:25:51.394 16:41:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:51.394 16:41:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:51.394 16:41:28 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:51.652 BaseBdev2_malloc 00:25:51.652 16:41:28 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:51.911 [2024-07-11 16:41:28.464078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:51.911 [2024-07-11 16:41:28.464141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.911 [2024-07-11 16:41:28.464178] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:25:51.911 [2024-07-11 16:41:28.464224] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.911 [2024-07-11 16:41:28.466189] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.911 [2024-07-11 16:41:28.466228] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:51.911 BaseBdev2 00:25:51.911 16:41:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:51.911 16:41:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:51.911 16:41:28 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:51.911 BaseBdev3_malloc 00:25:51.911 16:41:28 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:52.169 [2024-07-11 16:41:28.861910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:52.169 [2024-07-11 16:41:28.861971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:52.169 [2024-07-11 16:41:28.862006] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:52.169 [2024-07-11 16:41:28.862044] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:52.169 [2024-07-11 16:41:28.863899] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:52.169 [2024-07-11 16:41:28.863941] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:52.169 BaseBdev3 00:25:52.169 16:41:28 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:52.169 16:41:28 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:52.169 16:41:28 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:52.427 BaseBdev4_malloc 00:25:52.427 16:41:29 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:52.686 [2024-07-11 16:41:29.250287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:52.686 [2024-07-11 16:41:29.250354] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:52.686 [2024-07-11 16:41:29.250384] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:52.686 [2024-07-11 16:41:29.250421] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:52.686 [2024-07-11 16:41:29.252233] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:52.686 [2024-07-11 16:41:29.252274] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:52.686 BaseBdev4 00:25:52.686 16:41:29 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:52.686 spare_malloc 00:25:52.686 16:41:29 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:52.945 spare_delay 00:25:52.945 16:41:29 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:53.203 [2024-07-11 16:41:29.904018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:53.203 [2024-07-11 16:41:29.904079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:53.203 [2024-07-11 16:41:29.904106] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:53.203 [2024-07-11 16:41:29.904142] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:53.203 [2024-07-11 16:41:29.906088] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:53.203 [2024-07-11 16:41:29.906154] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:53.203 spare 00:25:53.204 16:41:29 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:53.462 [2024-07-11 16:41:30.080121] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:53.462 [2024-07-11 16:41:30.081706] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:53.462 [2024-07-11 16:41:30.081779] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:53.462 [2024-07-11 16:41:30.081831] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:53.462 [2024-07-11 16:41:30.082079] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:25:53.462 [2024-07-11 16:41:30.082103] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:53.462 [2024-07-11 16:41:30.082201] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:25:53.462 [2024-07-11 16:41:30.087705] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:25:53.462 [2024-07-11 16:41:30.087729] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:25:53.463 [2024-07-11 16:41:30.087919] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:53.463 16:41:30 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:53.463 16:41:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:53.463 16:41:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:53.463 16:41:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:53.463 16:41:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:53.463 16:41:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:53.463 16:41:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:53.463 16:41:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:53.463 16:41:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:53.463 16:41:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:53.463 16:41:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:53.463 16:41:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.721 16:41:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:53.721 "name": "raid_bdev1", 00:25:53.721 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:25:53.721 "strip_size_kb": 64, 00:25:53.721 "state": "online", 00:25:53.721 "raid_level": "raid5f", 00:25:53.721 "superblock": true, 00:25:53.721 "num_base_bdevs": 4, 00:25:53.721 "num_base_bdevs_discovered": 4, 00:25:53.721 "num_base_bdevs_operational": 4, 00:25:53.721 "base_bdevs_list": [ 00:25:53.721 { 00:25:53.721 "name": "BaseBdev1", 00:25:53.721 "uuid": "2c7bcf96-debd-50c0-8ff3-8fee6eca6738", 00:25:53.721 "is_configured": true, 00:25:53.721 "data_offset": 2048, 00:25:53.721 "data_size": 63488 00:25:53.721 }, 00:25:53.721 { 00:25:53.721 "name": "BaseBdev2", 00:25:53.721 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:25:53.721 "is_configured": true, 00:25:53.721 
"data_offset": 2048, 00:25:53.721 "data_size": 63488 00:25:53.721 }, 00:25:53.721 { 00:25:53.721 "name": "BaseBdev3", 00:25:53.721 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:25:53.721 "is_configured": true, 00:25:53.721 "data_offset": 2048, 00:25:53.721 "data_size": 63488 00:25:53.721 }, 00:25:53.721 { 00:25:53.721 "name": "BaseBdev4", 00:25:53.721 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:25:53.721 "is_configured": true, 00:25:53.721 "data_offset": 2048, 00:25:53.721 "data_size": 63488 00:25:53.721 } 00:25:53.721 ] 00:25:53.721 }' 00:25:53.721 16:41:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:53.721 16:41:30 -- common/autotest_common.sh@10 -- # set +x 00:25:54.286 16:41:30 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:54.286 16:41:30 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:54.286 [2024-07-11 16:41:31.046584] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:54.286 16:41:31 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:25:54.286 16:41:31 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:54.286 16:41:31 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.544 16:41:31 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:25:54.544 16:41:31 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:54.544 16:41:31 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:54.544 16:41:31 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:54.544 16:41:31 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:54.544 16:41:31 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:54.544 16:41:31 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:54.544 16:41:31 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:54.544 16:41:31 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:54.544 16:41:31 -- bdev/nbd_common.sh@12 -- # local i 00:25:54.544 16:41:31 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:54.544 16:41:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:54.544 16:41:31 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:54.803 [2024-07-11 16:41:31.458556] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:54.803 /dev/nbd0 00:25:54.803 16:41:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:54.803 16:41:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:54.803 16:41:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:25:54.803 16:41:31 -- common/autotest_common.sh@857 -- # local i 00:25:54.803 16:41:31 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:25:54.803 16:41:31 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:25:54.803 16:41:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:25:54.803 16:41:31 -- common/autotest_common.sh@861 -- # break 00:25:54.803 16:41:31 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:25:54.803 16:41:31 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:25:54.803 16:41:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:54.803 1+0 records in 00:25:54.803 1+0 records out 00:25:54.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208882 s, 
19.6 MB/s 00:25:54.803 16:41:31 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.803 16:41:31 -- common/autotest_common.sh@874 -- # size=4096 00:25:54.803 16:41:31 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:54.803 16:41:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:25:54.803 16:41:31 -- common/autotest_common.sh@877 -- # return 0 00:25:54.803 16:41:31 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:54.803 16:41:31 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:54.803 16:41:31 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:54.803 16:41:31 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:54.803 16:41:31 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:54.803 16:41:31 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:25:55.371 496+0 records in 00:25:55.371 496+0 records out 00:25:55.371 97517568 bytes (98 MB, 93 MiB) copied, 0.424034 s, 230 MB/s 00:25:55.371 16:41:31 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:55.371 16:41:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:55.371 16:41:31 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:55.371 16:41:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:55.371 16:41:31 -- bdev/nbd_common.sh@51 -- # local i 00:25:55.371 16:41:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:55.371 16:41:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:55.371 16:41:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:55.371 16:41:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:55.371 16:41:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:55.371 16:41:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:55.371 16:41:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.371 16:41:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:55.371 16:41:32 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:55.371 [2024-07-11 16:41:32.151336] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:55.630 16:41:32 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:55.630 16:41:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:55.630 16:41:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:55.630 16:41:32 -- bdev/nbd_common.sh@41 -- # break 00:25:55.630 16:41:32 -- bdev/nbd_common.sh@45 -- # return 0 00:25:55.630 16:41:32 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:55.888 [2024-07-11 16:41:32.475153] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:55.888 16:41:32 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:55.888 16:41:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:55.888 16:41:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:55.888 16:41:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:55.888 16:41:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:55.888 16:41:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:55.888 16:41:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:55.888 16:41:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:55.888 16:41:32 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:55.888 16:41:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:55.888 16:41:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.888 16:41:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.888 16:41:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:55.888 "name": "raid_bdev1", 00:25:55.888 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:25:55.888 "strip_size_kb": 64, 00:25:55.888 "state": "online", 00:25:55.888 "raid_level": "raid5f", 00:25:55.888 "superblock": true, 00:25:55.888 "num_base_bdevs": 4, 00:25:55.888 "num_base_bdevs_discovered": 3, 00:25:55.888 "num_base_bdevs_operational": 3, 00:25:55.888 "base_bdevs_list": [ 00:25:55.888 { 00:25:55.888 "name": null, 00:25:55.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.888 "is_configured": false, 00:25:55.888 "data_offset": 2048, 00:25:55.888 "data_size": 63488 00:25:55.888 }, 00:25:55.888 { 00:25:55.888 "name": "BaseBdev2", 00:25:55.888 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:25:55.888 "is_configured": true, 00:25:55.888 "data_offset": 2048, 00:25:55.888 "data_size": 63488 00:25:55.888 }, 00:25:55.888 { 00:25:55.888 "name": "BaseBdev3", 00:25:55.888 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:25:55.888 "is_configured": true, 00:25:55.888 "data_offset": 2048, 00:25:55.888 "data_size": 63488 00:25:55.888 }, 00:25:55.888 { 00:25:55.888 "name": "BaseBdev4", 00:25:55.888 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:25:55.888 "is_configured": true, 00:25:55.888 "data_offset": 2048, 00:25:55.888 "data_size": 63488 00:25:55.888 } 00:25:55.888 ] 00:25:55.888 }' 00:25:55.888 16:41:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:55.888 16:41:32 -- common/autotest_common.sh@10 -- # set +x 00:25:56.822 16:41:33 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:56.822 [2024-07-11 16:41:33.564503] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:56.822 [2024-07-11 16:41:33.564575] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:56.822 [2024-07-11 16:41:33.575789] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bd00 00:25:56.822 [2024-07-11 16:41:33.583182] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:56.822 16:41:33 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:58.196 16:41:34 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:58.196 16:41:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:58.196 16:41:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:58.196 16:41:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:58.196 16:41:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:58.196 16:41:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.196 16:41:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.196 16:41:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:58.196 "name": "raid_bdev1", 00:25:58.196 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:25:58.196 "strip_size_kb": 64, 00:25:58.196 "state": "online", 00:25:58.196 "raid_level": "raid5f", 
00:25:58.196 "superblock": true, 00:25:58.196 "num_base_bdevs": 4, 00:25:58.196 "num_base_bdevs_discovered": 4, 00:25:58.196 "num_base_bdevs_operational": 4, 00:25:58.196 "process": { 00:25:58.196 "type": "rebuild", 00:25:58.196 "target": "spare", 00:25:58.196 "progress": { 00:25:58.196 "blocks": 23040, 00:25:58.196 "percent": 12 00:25:58.196 } 00:25:58.196 }, 00:25:58.196 "base_bdevs_list": [ 00:25:58.196 { 00:25:58.196 "name": "spare", 00:25:58.196 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:25:58.196 "is_configured": true, 00:25:58.196 "data_offset": 2048, 00:25:58.196 "data_size": 63488 00:25:58.196 }, 00:25:58.196 { 00:25:58.196 "name": "BaseBdev2", 00:25:58.196 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:25:58.196 "is_configured": true, 00:25:58.196 "data_offset": 2048, 00:25:58.196 "data_size": 63488 00:25:58.196 }, 00:25:58.196 { 00:25:58.196 "name": "BaseBdev3", 00:25:58.196 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:25:58.196 "is_configured": true, 00:25:58.196 "data_offset": 2048, 00:25:58.196 "data_size": 63488 00:25:58.196 }, 00:25:58.196 { 00:25:58.196 "name": "BaseBdev4", 00:25:58.196 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:25:58.196 "is_configured": true, 00:25:58.196 "data_offset": 2048, 00:25:58.196 "data_size": 63488 00:25:58.196 } 00:25:58.196 ] 00:25:58.196 }' 00:25:58.196 16:41:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:58.196 16:41:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:58.196 16:41:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:58.196 16:41:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:58.196 16:41:34 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:58.455 [2024-07-11 16:41:35.144447] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:58.455 [2024-07-11 16:41:35.194476] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:58.455 [2024-07-11 16:41:35.194559] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.455 16:41:35 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:58.455 16:41:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:58.455 16:41:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:58.455 16:41:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:58.455 16:41:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:58.455 16:41:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:58.455 16:41:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:58.455 16:41:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:58.455 16:41:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:58.455 16:41:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:58.455 16:41:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.455 16:41:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.714 16:41:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:58.714 "name": "raid_bdev1", 00:25:58.714 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:25:58.714 "strip_size_kb": 64, 00:25:58.714 "state": "online", 00:25:58.714 "raid_level": "raid5f", 00:25:58.714 "superblock": true, 00:25:58.714 
"num_base_bdevs": 4, 00:25:58.714 "num_base_bdevs_discovered": 3, 00:25:58.714 "num_base_bdevs_operational": 3, 00:25:58.714 "base_bdevs_list": [ 00:25:58.714 { 00:25:58.714 "name": null, 00:25:58.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.714 "is_configured": false, 00:25:58.714 "data_offset": 2048, 00:25:58.714 "data_size": 63488 00:25:58.714 }, 00:25:58.714 { 00:25:58.714 "name": "BaseBdev2", 00:25:58.714 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:25:58.714 "is_configured": true, 00:25:58.714 "data_offset": 2048, 00:25:58.714 "data_size": 63488 00:25:58.714 }, 00:25:58.714 { 00:25:58.714 "name": "BaseBdev3", 00:25:58.714 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:25:58.714 "is_configured": true, 00:25:58.714 "data_offset": 2048, 00:25:58.714 "data_size": 63488 00:25:58.714 }, 00:25:58.714 { 00:25:58.714 "name": "BaseBdev4", 00:25:58.714 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:25:58.714 "is_configured": true, 00:25:58.714 "data_offset": 2048, 00:25:58.714 "data_size": 63488 00:25:58.714 } 00:25:58.714 ] 00:25:58.714 }' 00:25:58.714 16:41:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:58.714 16:41:35 -- common/autotest_common.sh@10 -- # set +x 00:25:59.651 16:41:36 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:59.651 16:41:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:59.651 16:41:36 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:59.651 16:41:36 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:59.651 16:41:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:59.651 16:41:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.651 16:41:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.651 16:41:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:59.651 "name": "raid_bdev1", 00:25:59.651 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:25:59.651 "strip_size_kb": 64, 00:25:59.651 "state": "online", 00:25:59.651 "raid_level": "raid5f", 00:25:59.651 "superblock": true, 00:25:59.651 "num_base_bdevs": 4, 00:25:59.651 "num_base_bdevs_discovered": 3, 00:25:59.651 "num_base_bdevs_operational": 3, 00:25:59.651 "base_bdevs_list": [ 00:25:59.651 { 00:25:59.651 "name": null, 00:25:59.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.651 "is_configured": false, 00:25:59.651 "data_offset": 2048, 00:25:59.651 "data_size": 63488 00:25:59.651 }, 00:25:59.651 { 00:25:59.651 "name": "BaseBdev2", 00:25:59.651 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:25:59.651 "is_configured": true, 00:25:59.651 "data_offset": 2048, 00:25:59.651 "data_size": 63488 00:25:59.651 }, 00:25:59.651 { 00:25:59.651 "name": "BaseBdev3", 00:25:59.651 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:25:59.651 "is_configured": true, 00:25:59.651 "data_offset": 2048, 00:25:59.651 "data_size": 63488 00:25:59.651 }, 00:25:59.651 { 00:25:59.651 "name": "BaseBdev4", 00:25:59.651 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:25:59.651 "is_configured": true, 00:25:59.651 "data_offset": 2048, 00:25:59.651 "data_size": 63488 00:25:59.651 } 00:25:59.651 ] 00:25:59.651 }' 00:25:59.651 16:41:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:59.651 16:41:36 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:59.651 16:41:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:59.910 16:41:36 -- bdev/bdev_raid.sh@191 -- # 
[[ none == \n\o\n\e ]] 00:25:59.910 16:41:36 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:00.168 [2024-07-11 16:41:36.744321] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:26:00.168 [2024-07-11 16:41:36.744381] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:00.168 [2024-07-11 16:41:36.755396] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bea0 00:26:00.168 [2024-07-11 16:41:36.762897] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:00.168 16:41:36 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:26:01.126 16:41:37 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:01.126 16:41:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:01.126 16:41:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:01.126 16:41:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:01.126 16:41:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:01.126 16:41:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.126 16:41:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.384 16:41:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:01.384 "name": "raid_bdev1", 00:26:01.384 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:26:01.384 "strip_size_kb": 64, 00:26:01.384 "state": "online", 00:26:01.384 "raid_level": "raid5f", 00:26:01.384 "superblock": true, 00:26:01.384 "num_base_bdevs": 4, 00:26:01.384 "num_base_bdevs_discovered": 4, 00:26:01.384 "num_base_bdevs_operational": 4, 00:26:01.384 "process": { 00:26:01.384 "type": "rebuild", 00:26:01.384 "target": "spare", 00:26:01.384 "progress": { 00:26:01.384 "blocks": 23040, 00:26:01.384 "percent": 12 00:26:01.384 } 00:26:01.384 }, 00:26:01.384 "base_bdevs_list": [ 00:26:01.384 { 00:26:01.384 "name": "spare", 00:26:01.384 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:26:01.384 "is_configured": true, 00:26:01.384 "data_offset": 2048, 00:26:01.384 "data_size": 63488 00:26:01.384 }, 00:26:01.384 { 00:26:01.384 "name": "BaseBdev2", 00:26:01.384 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:26:01.384 "is_configured": true, 00:26:01.384 "data_offset": 2048, 00:26:01.384 "data_size": 63488 00:26:01.384 }, 00:26:01.384 { 00:26:01.384 "name": "BaseBdev3", 00:26:01.384 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:26:01.384 "is_configured": true, 00:26:01.384 "data_offset": 2048, 00:26:01.384 "data_size": 63488 00:26:01.384 }, 00:26:01.384 { 00:26:01.384 "name": "BaseBdev4", 00:26:01.384 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:26:01.384 "is_configured": true, 00:26:01.384 "data_offset": 2048, 00:26:01.384 "data_size": 63488 00:26:01.384 } 00:26:01.384 ] 00:26:01.384 }' 00:26:01.384 16:41:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:26:01.384 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 
617: [: =: unary operator expected 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@657 -- # local timeout=714 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.384 16:41:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:01.643 16:41:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:01.643 "name": "raid_bdev1", 00:26:01.643 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:26:01.643 "strip_size_kb": 64, 00:26:01.643 "state": "online", 00:26:01.643 "raid_level": "raid5f", 00:26:01.643 "superblock": true, 00:26:01.643 "num_base_bdevs": 4, 00:26:01.643 "num_base_bdevs_discovered": 4, 00:26:01.643 "num_base_bdevs_operational": 4, 00:26:01.643 "process": { 00:26:01.643 "type": "rebuild", 00:26:01.643 "target": "spare", 00:26:01.643 "progress": { 00:26:01.643 "blocks": 30720, 00:26:01.643 "percent": 16 00:26:01.643 } 00:26:01.643 }, 00:26:01.643 "base_bdevs_list": [ 00:26:01.643 { 00:26:01.643 "name": "spare", 00:26:01.643 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:26:01.643 "is_configured": true, 00:26:01.643 "data_offset": 2048, 00:26:01.643 "data_size": 63488 00:26:01.643 }, 00:26:01.643 { 00:26:01.643 "name": "BaseBdev2", 00:26:01.643 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:26:01.643 "is_configured": true, 00:26:01.643 "data_offset": 2048, 00:26:01.643 "data_size": 63488 00:26:01.643 }, 00:26:01.643 { 00:26:01.643 "name": "BaseBdev3", 00:26:01.643 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:26:01.643 "is_configured": true, 00:26:01.643 "data_offset": 2048, 00:26:01.643 "data_size": 63488 00:26:01.643 }, 00:26:01.643 { 00:26:01.643 "name": "BaseBdev4", 00:26:01.643 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:26:01.643 "is_configured": true, 00:26:01.643 "data_offset": 2048, 00:26:01.643 "data_size": 63488 00:26:01.643 } 00:26:01.643 ] 00:26:01.643 }' 00:26:01.643 16:41:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:01.901 16:41:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:01.901 16:41:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:01.901 16:41:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:01.901 16:41:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:02.834 16:41:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:02.834 16:41:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:02.834 16:41:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:02.834 16:41:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:02.834 16:41:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:02.834 16:41:39 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:02.834 16:41:39 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.834 16:41:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:03.093 16:41:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:03.093 "name": "raid_bdev1", 00:26:03.093 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:26:03.093 "strip_size_kb": 64, 00:26:03.093 "state": "online", 00:26:03.093 "raid_level": "raid5f", 00:26:03.093 "superblock": true, 00:26:03.093 "num_base_bdevs": 4, 00:26:03.093 "num_base_bdevs_discovered": 4, 00:26:03.093 "num_base_bdevs_operational": 4, 00:26:03.093 "process": { 00:26:03.093 "type": "rebuild", 00:26:03.093 "target": "spare", 00:26:03.093 "progress": { 00:26:03.093 "blocks": 55680, 00:26:03.093 "percent": 29 00:26:03.093 } 00:26:03.093 }, 00:26:03.093 "base_bdevs_list": [ 00:26:03.093 { 00:26:03.093 "name": "spare", 00:26:03.093 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:26:03.093 "is_configured": true, 00:26:03.093 "data_offset": 2048, 00:26:03.093 "data_size": 63488 00:26:03.093 }, 00:26:03.093 { 00:26:03.093 "name": "BaseBdev2", 00:26:03.093 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:26:03.093 "is_configured": true, 00:26:03.093 "data_offset": 2048, 00:26:03.093 "data_size": 63488 00:26:03.093 }, 00:26:03.093 { 00:26:03.093 "name": "BaseBdev3", 00:26:03.093 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:26:03.093 "is_configured": true, 00:26:03.093 "data_offset": 2048, 00:26:03.093 "data_size": 63488 00:26:03.093 }, 00:26:03.093 { 00:26:03.093 "name": "BaseBdev4", 00:26:03.093 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:26:03.093 "is_configured": true, 00:26:03.093 "data_offset": 2048, 00:26:03.093 "data_size": 63488 00:26:03.093 } 00:26:03.093 ] 00:26:03.093 }' 00:26:03.093 16:41:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:03.093 16:41:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:03.093 16:41:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:03.093 16:41:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:03.093 16:41:39 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:04.466 16:41:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:04.466 16:41:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:04.466 16:41:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:04.466 16:41:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:04.466 16:41:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:04.466 16:41:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:04.466 16:41:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:04.466 16:41:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:04.466 16:41:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:04.466 "name": "raid_bdev1", 00:26:04.466 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:26:04.466 "strip_size_kb": 64, 00:26:04.466 "state": "online", 00:26:04.466 "raid_level": "raid5f", 00:26:04.466 "superblock": true, 00:26:04.466 "num_base_bdevs": 4, 00:26:04.466 "num_base_bdevs_discovered": 4, 00:26:04.466 "num_base_bdevs_operational": 4, 00:26:04.466 "process": { 00:26:04.466 "type": "rebuild", 00:26:04.466 "target": "spare", 00:26:04.466 "progress": { 00:26:04.466 "blocks": 82560, 00:26:04.466 "percent": 43 00:26:04.466 } 00:26:04.466 }, 
00:26:04.466 "base_bdevs_list": [ 00:26:04.466 { 00:26:04.466 "name": "spare", 00:26:04.466 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:26:04.466 "is_configured": true, 00:26:04.466 "data_offset": 2048, 00:26:04.466 "data_size": 63488 00:26:04.466 }, 00:26:04.466 { 00:26:04.466 "name": "BaseBdev2", 00:26:04.466 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:26:04.466 "is_configured": true, 00:26:04.466 "data_offset": 2048, 00:26:04.466 "data_size": 63488 00:26:04.466 }, 00:26:04.466 { 00:26:04.466 "name": "BaseBdev3", 00:26:04.466 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:26:04.466 "is_configured": true, 00:26:04.466 "data_offset": 2048, 00:26:04.466 "data_size": 63488 00:26:04.466 }, 00:26:04.466 { 00:26:04.466 "name": "BaseBdev4", 00:26:04.466 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:26:04.466 "is_configured": true, 00:26:04.466 "data_offset": 2048, 00:26:04.466 "data_size": 63488 00:26:04.466 } 00:26:04.466 ] 00:26:04.466 }' 00:26:04.466 16:41:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:04.466 16:41:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:04.466 16:41:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:04.724 16:41:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:04.724 16:41:41 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:05.659 16:41:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:05.659 16:41:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:05.659 16:41:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:05.659 16:41:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:05.659 16:41:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:05.659 16:41:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:05.659 16:41:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.659 16:41:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.917 16:41:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:05.917 "name": "raid_bdev1", 00:26:05.917 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:26:05.917 "strip_size_kb": 64, 00:26:05.917 "state": "online", 00:26:05.917 "raid_level": "raid5f", 00:26:05.917 "superblock": true, 00:26:05.917 "num_base_bdevs": 4, 00:26:05.917 "num_base_bdevs_discovered": 4, 00:26:05.917 "num_base_bdevs_operational": 4, 00:26:05.917 "process": { 00:26:05.917 "type": "rebuild", 00:26:05.917 "target": "spare", 00:26:05.917 "progress": { 00:26:05.917 "blocks": 107520, 00:26:05.917 "percent": 56 00:26:05.917 } 00:26:05.917 }, 00:26:05.917 "base_bdevs_list": [ 00:26:05.917 { 00:26:05.917 "name": "spare", 00:26:05.917 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:26:05.917 "is_configured": true, 00:26:05.917 "data_offset": 2048, 00:26:05.917 "data_size": 63488 00:26:05.917 }, 00:26:05.917 { 00:26:05.917 "name": "BaseBdev2", 00:26:05.917 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:26:05.917 "is_configured": true, 00:26:05.917 "data_offset": 2048, 00:26:05.917 "data_size": 63488 00:26:05.917 }, 00:26:05.917 { 00:26:05.917 "name": "BaseBdev3", 00:26:05.917 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:26:05.917 "is_configured": true, 00:26:05.917 "data_offset": 2048, 00:26:05.917 "data_size": 63488 00:26:05.917 }, 00:26:05.917 { 00:26:05.917 "name": "BaseBdev4", 00:26:05.917 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 
00:26:05.917 "is_configured": true, 00:26:05.917 "data_offset": 2048, 00:26:05.917 "data_size": 63488 00:26:05.917 } 00:26:05.917 ] 00:26:05.917 }' 00:26:05.917 16:41:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:05.917 16:41:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:05.917 16:41:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:05.917 16:41:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:05.917 16:41:42 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:06.850 16:41:43 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:06.850 16:41:43 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:06.850 16:41:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:06.850 16:41:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:06.850 16:41:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:06.850 16:41:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:06.850 16:41:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.850 16:41:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.108 16:41:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:07.108 "name": "raid_bdev1", 00:26:07.108 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:26:07.108 "strip_size_kb": 64, 00:26:07.108 "state": "online", 00:26:07.108 "raid_level": "raid5f", 00:26:07.108 "superblock": true, 00:26:07.108 "num_base_bdevs": 4, 00:26:07.108 "num_base_bdevs_discovered": 4, 00:26:07.108 "num_base_bdevs_operational": 4, 00:26:07.108 "process": { 00:26:07.108 "type": "rebuild", 00:26:07.108 "target": "spare", 00:26:07.108 "progress": { 00:26:07.108 "blocks": 132480, 00:26:07.108 "percent": 69 00:26:07.108 } 00:26:07.108 }, 00:26:07.108 "base_bdevs_list": [ 00:26:07.108 { 00:26:07.108 "name": "spare", 00:26:07.108 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:26:07.108 "is_configured": true, 00:26:07.108 "data_offset": 2048, 00:26:07.108 "data_size": 63488 00:26:07.108 }, 00:26:07.108 { 00:26:07.108 "name": "BaseBdev2", 00:26:07.108 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:26:07.108 "is_configured": true, 00:26:07.108 "data_offset": 2048, 00:26:07.108 "data_size": 63488 00:26:07.108 }, 00:26:07.108 { 00:26:07.108 "name": "BaseBdev3", 00:26:07.108 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:26:07.108 "is_configured": true, 00:26:07.108 "data_offset": 2048, 00:26:07.108 "data_size": 63488 00:26:07.108 }, 00:26:07.108 { 00:26:07.108 "name": "BaseBdev4", 00:26:07.108 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:26:07.108 "is_configured": true, 00:26:07.108 "data_offset": 2048, 00:26:07.108 "data_size": 63488 00:26:07.108 } 00:26:07.108 ] 00:26:07.108 }' 00:26:07.108 16:41:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:07.108 16:41:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:07.108 16:41:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:07.366 16:41:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:07.366 16:41:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:08.302 16:41:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:08.302 16:41:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:08.302 16:41:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:08.302 16:41:44 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:08.302 16:41:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:08.302 16:41:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:08.302 16:41:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.302 16:41:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:08.560 16:41:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:08.560 "name": "raid_bdev1", 00:26:08.560 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:26:08.560 "strip_size_kb": 64, 00:26:08.560 "state": "online", 00:26:08.560 "raid_level": "raid5f", 00:26:08.560 "superblock": true, 00:26:08.560 "num_base_bdevs": 4, 00:26:08.560 "num_base_bdevs_discovered": 4, 00:26:08.560 "num_base_bdevs_operational": 4, 00:26:08.560 "process": { 00:26:08.560 "type": "rebuild", 00:26:08.560 "target": "spare", 00:26:08.560 "progress": { 00:26:08.560 "blocks": 159360, 00:26:08.560 "percent": 83 00:26:08.560 } 00:26:08.561 }, 00:26:08.561 "base_bdevs_list": [ 00:26:08.561 { 00:26:08.561 "name": "spare", 00:26:08.561 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:26:08.561 "is_configured": true, 00:26:08.561 "data_offset": 2048, 00:26:08.561 "data_size": 63488 00:26:08.561 }, 00:26:08.561 { 00:26:08.561 "name": "BaseBdev2", 00:26:08.561 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:26:08.561 "is_configured": true, 00:26:08.561 "data_offset": 2048, 00:26:08.561 "data_size": 63488 00:26:08.561 }, 00:26:08.561 { 00:26:08.561 "name": "BaseBdev3", 00:26:08.561 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:26:08.561 "is_configured": true, 00:26:08.561 "data_offset": 2048, 00:26:08.561 "data_size": 63488 00:26:08.561 }, 00:26:08.561 { 00:26:08.561 "name": "BaseBdev4", 00:26:08.561 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:26:08.561 "is_configured": true, 00:26:08.561 "data_offset": 2048, 00:26:08.561 "data_size": 63488 00:26:08.561 } 00:26:08.561 ] 00:26:08.561 }' 00:26:08.561 16:41:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:08.561 16:41:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:08.561 16:41:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:08.561 16:41:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:08.561 16:41:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:09.497 16:41:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:09.497 16:41:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:09.497 16:41:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:09.497 16:41:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:09.497 16:41:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:09.497 16:41:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:09.497 16:41:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.497 16:41:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:09.756 16:41:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:09.756 "name": "raid_bdev1", 00:26:09.756 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:26:09.756 "strip_size_kb": 64, 00:26:09.756 "state": "online", 00:26:09.756 "raid_level": "raid5f", 00:26:09.756 "superblock": true, 00:26:09.756 "num_base_bdevs": 4, 00:26:09.756 "num_base_bdevs_discovered": 4, 
00:26:09.756 "num_base_bdevs_operational": 4, 00:26:09.756 "process": { 00:26:09.756 "type": "rebuild", 00:26:09.756 "target": "spare", 00:26:09.756 "progress": { 00:26:09.756 "blocks": 184320, 00:26:09.756 "percent": 96 00:26:09.756 } 00:26:09.756 }, 00:26:09.756 "base_bdevs_list": [ 00:26:09.756 { 00:26:09.756 "name": "spare", 00:26:09.756 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:26:09.756 "is_configured": true, 00:26:09.756 "data_offset": 2048, 00:26:09.756 "data_size": 63488 00:26:09.756 }, 00:26:09.756 { 00:26:09.756 "name": "BaseBdev2", 00:26:09.756 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:26:09.756 "is_configured": true, 00:26:09.756 "data_offset": 2048, 00:26:09.756 "data_size": 63488 00:26:09.756 }, 00:26:09.756 { 00:26:09.756 "name": "BaseBdev3", 00:26:09.756 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:26:09.756 "is_configured": true, 00:26:09.756 "data_offset": 2048, 00:26:09.756 "data_size": 63488 00:26:09.756 }, 00:26:09.756 { 00:26:09.756 "name": "BaseBdev4", 00:26:09.756 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:26:09.756 "is_configured": true, 00:26:09.756 "data_offset": 2048, 00:26:09.756 "data_size": 63488 00:26:09.756 } 00:26:09.756 ] 00:26:09.756 }' 00:26:09.756 16:41:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:10.015 16:41:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:10.015 16:41:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:10.015 16:41:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:26:10.015 16:41:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:10.273 [2024-07-11 16:41:46.838118] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:10.273 [2024-07-11 16:41:46.838217] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:10.273 [2024-07-11 16:41:46.838431] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:11.207 "name": "raid_bdev1", 00:26:11.207 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:26:11.207 "strip_size_kb": 64, 00:26:11.207 "state": "online", 00:26:11.207 "raid_level": "raid5f", 00:26:11.207 "superblock": true, 00:26:11.207 "num_base_bdevs": 4, 00:26:11.207 "num_base_bdevs_discovered": 4, 00:26:11.207 "num_base_bdevs_operational": 4, 00:26:11.207 "base_bdevs_list": [ 00:26:11.207 { 00:26:11.207 "name": "spare", 00:26:11.207 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:26:11.207 "is_configured": true, 00:26:11.207 "data_offset": 2048, 00:26:11.207 "data_size": 63488 00:26:11.207 }, 00:26:11.207 { 00:26:11.207 "name": "BaseBdev2", 00:26:11.207 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:26:11.207 "is_configured": 
true, 00:26:11.207 "data_offset": 2048, 00:26:11.207 "data_size": 63488 00:26:11.207 }, 00:26:11.207 { 00:26:11.207 "name": "BaseBdev3", 00:26:11.207 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:26:11.207 "is_configured": true, 00:26:11.207 "data_offset": 2048, 00:26:11.207 "data_size": 63488 00:26:11.207 }, 00:26:11.207 { 00:26:11.207 "name": "BaseBdev4", 00:26:11.207 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:26:11.207 "is_configured": true, 00:26:11.207 "data_offset": 2048, 00:26:11.207 "data_size": 63488 00:26:11.207 } 00:26:11.207 ] 00:26:11.207 }' 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@660 -- # break 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.207 16:41:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.466 16:41:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:11.466 "name": "raid_bdev1", 00:26:11.466 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:26:11.466 "strip_size_kb": 64, 00:26:11.466 "state": "online", 00:26:11.466 "raid_level": "raid5f", 00:26:11.466 "superblock": true, 00:26:11.466 "num_base_bdevs": 4, 00:26:11.466 "num_base_bdevs_discovered": 4, 00:26:11.466 "num_base_bdevs_operational": 4, 00:26:11.466 "base_bdevs_list": [ 00:26:11.466 { 00:26:11.466 "name": "spare", 00:26:11.466 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:26:11.466 "is_configured": true, 00:26:11.466 "data_offset": 2048, 00:26:11.466 "data_size": 63488 00:26:11.466 }, 00:26:11.466 { 00:26:11.466 "name": "BaseBdev2", 00:26:11.466 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:26:11.466 "is_configured": true, 00:26:11.466 "data_offset": 2048, 00:26:11.466 "data_size": 63488 00:26:11.466 }, 00:26:11.466 { 00:26:11.466 "name": "BaseBdev3", 00:26:11.466 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:26:11.466 "is_configured": true, 00:26:11.466 "data_offset": 2048, 00:26:11.466 "data_size": 63488 00:26:11.466 }, 00:26:11.466 { 00:26:11.466 "name": "BaseBdev4", 00:26:11.466 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:26:11.466 "is_configured": true, 00:26:11.466 "data_offset": 2048, 00:26:11.466 "data_size": 63488 00:26:11.466 } 00:26:11.466 ] 00:26:11.466 }' 00:26:11.466 16:41:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:11.466 16:41:48 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:11.466 16:41:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:11.723 16:41:48 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:11.723 16:41:48 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:11.723 16:41:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:11.724 16:41:48 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:11.724 16:41:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:11.724 16:41:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:11.724 16:41:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:11.724 16:41:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:11.724 16:41:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:11.724 16:41:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:11.724 16:41:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:11.724 16:41:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.724 16:41:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:11.724 16:41:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:11.724 "name": "raid_bdev1", 00:26:11.724 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:26:11.724 "strip_size_kb": 64, 00:26:11.724 "state": "online", 00:26:11.724 "raid_level": "raid5f", 00:26:11.724 "superblock": true, 00:26:11.724 "num_base_bdevs": 4, 00:26:11.724 "num_base_bdevs_discovered": 4, 00:26:11.724 "num_base_bdevs_operational": 4, 00:26:11.724 "base_bdevs_list": [ 00:26:11.724 { 00:26:11.724 "name": "spare", 00:26:11.724 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:26:11.724 "is_configured": true, 00:26:11.724 "data_offset": 2048, 00:26:11.724 "data_size": 63488 00:26:11.724 }, 00:26:11.724 { 00:26:11.724 "name": "BaseBdev2", 00:26:11.724 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:26:11.724 "is_configured": true, 00:26:11.724 "data_offset": 2048, 00:26:11.724 "data_size": 63488 00:26:11.724 }, 00:26:11.724 { 00:26:11.724 "name": "BaseBdev3", 00:26:11.724 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:26:11.724 "is_configured": true, 00:26:11.724 "data_offset": 2048, 00:26:11.724 "data_size": 63488 00:26:11.724 }, 00:26:11.724 { 00:26:11.724 "name": "BaseBdev4", 00:26:11.724 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:26:11.724 "is_configured": true, 00:26:11.724 "data_offset": 2048, 00:26:11.724 "data_size": 63488 00:26:11.724 } 00:26:11.724 ] 00:26:11.724 }' 00:26:11.724 16:41:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:11.724 16:41:48 -- common/autotest_common.sh@10 -- # set +x 00:26:12.656 16:41:49 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:12.656 [2024-07-11 16:41:49.456907] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:12.656 [2024-07-11 16:41:49.456950] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:12.656 [2024-07-11 16:41:49.457042] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:12.656 [2024-07-11 16:41:49.457166] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:12.656 [2024-07-11 16:41:49.457194] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:26:12.914 16:41:49 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.914 16:41:49 -- bdev/bdev_raid.sh@671 -- # jq length 00:26:12.914 16:41:49 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:26:12.914 16:41:49 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:26:12.914 16:41:49 -- 
bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:12.914 16:41:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:12.914 16:41:49 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:26:12.914 16:41:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:12.914 16:41:49 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:26:12.914 16:41:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:12.914 16:41:49 -- bdev/nbd_common.sh@12 -- # local i 00:26:12.914 16:41:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:12.914 16:41:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:12.914 16:41:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:13.172 /dev/nbd0 00:26:13.172 16:41:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:13.172 16:41:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:13.172 16:41:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:26:13.172 16:41:49 -- common/autotest_common.sh@857 -- # local i 00:26:13.172 16:41:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:13.172 16:41:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:13.172 16:41:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:26:13.172 16:41:49 -- common/autotest_common.sh@861 -- # break 00:26:13.172 16:41:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:13.172 16:41:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:13.172 16:41:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:13.172 1+0 records in 00:26:13.172 1+0 records out 00:26:13.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386587 s, 10.6 MB/s 00:26:13.172 16:41:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:13.172 16:41:49 -- common/autotest_common.sh@874 -- # size=4096 00:26:13.172 16:41:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:13.172 16:41:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:13.172 16:41:49 -- common/autotest_common.sh@877 -- # return 0 00:26:13.172 16:41:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:13.172 16:41:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:13.172 16:41:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:13.430 /dev/nbd1 00:26:13.430 16:41:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:13.430 16:41:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:13.430 16:41:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:26:13.430 16:41:50 -- common/autotest_common.sh@857 -- # local i 00:26:13.430 16:41:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:26:13.430 16:41:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:26:13.430 16:41:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:26:13.430 16:41:50 -- common/autotest_common.sh@861 -- # break 00:26:13.430 16:41:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:26:13.430 16:41:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:26:13.430 16:41:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:13.430 1+0 records in 00:26:13.430 1+0 
records out 00:26:13.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292732 s, 14.0 MB/s 00:26:13.430 16:41:50 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:13.430 16:41:50 -- common/autotest_common.sh@874 -- # size=4096 00:26:13.430 16:41:50 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:13.430 16:41:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:26:13.430 16:41:50 -- common/autotest_common.sh@877 -- # return 0 00:26:13.430 16:41:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:13.430 16:41:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:13.430 16:41:50 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:26:13.687 16:41:50 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:13.687 16:41:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:13.687 16:41:50 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:26:13.687 16:41:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:13.687 16:41:50 -- bdev/nbd_common.sh@51 -- # local i 00:26:13.687 16:41:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:13.687 16:41:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@41 -- # break 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@45 -- # return 0 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:13.954 16:41:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:14.237 16:41:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:26:14.237 16:41:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:14.237 16:41:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:14.237 16:41:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:14.237 16:41:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:14.237 16:41:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:14.237 16:41:50 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:26:14.237 16:41:51 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:26:14.237 16:41:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:14.237 16:41:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:14.237 16:41:51 -- bdev/nbd_common.sh@41 -- # break 00:26:14.237 16:41:51 -- bdev/nbd_common.sh@45 -- # return 0 00:26:14.237 16:41:51 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:26:14.237 16:41:51 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:14.237 16:41:51 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:26:14.237 16:41:51 -- bdev/bdev_raid.sh@698 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:26:14.510 16:41:51 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:14.768 [2024-07-11 16:41:51.428166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:14.768 [2024-07-11 16:41:51.428260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:14.768 [2024-07-11 16:41:51.428300] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:26:14.768 [2024-07-11 16:41:51.428320] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:14.768 [2024-07-11 16:41:51.430530] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:14.768 [2024-07-11 16:41:51.430588] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:14.768 [2024-07-11 16:41:51.430763] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:26:14.768 [2024-07-11 16:41:51.430820] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:14.768 BaseBdev1 00:26:14.768 16:41:51 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:14.768 16:41:51 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:26:14.768 16:41:51 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:26:15.026 16:41:51 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:15.283 [2024-07-11 16:41:51.852200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:15.283 [2024-07-11 16:41:51.852279] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:15.283 [2024-07-11 16:41:51.852317] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:26:15.283 [2024-07-11 16:41:51.852335] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:15.283 [2024-07-11 16:41:51.852779] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:15.283 [2024-07-11 16:41:51.852856] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:15.283 [2024-07-11 16:41:51.852984] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:26:15.283 [2024-07-11 16:41:51.853000] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:26:15.283 [2024-07-11 16:41:51.853007] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:15.283 [2024-07-11 16:41:51.853025] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:26:15.283 [2024-07-11 16:41:51.853097] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:15.283 BaseBdev2 00:26:15.283 16:41:51 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:15.283 16:41:51 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:26:15.283 16:41:51 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete BaseBdev3 00:26:15.283 16:41:52 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:26:15.541 [2024-07-11 16:41:52.268276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:26:15.541 [2024-07-11 16:41:52.268348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:15.541 [2024-07-11 16:41:52.268375] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:26:15.541 [2024-07-11 16:41:52.268397] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:15.541 [2024-07-11 16:41:52.268830] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:15.541 [2024-07-11 16:41:52.268888] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:15.541 [2024-07-11 16:41:52.268985] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:26:15.541 [2024-07-11 16:41:52.269011] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:15.541 BaseBdev3 00:26:15.541 16:41:52 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:26:15.541 16:41:52 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:26:15.541 16:41:52 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:26:15.799 16:41:52 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:26:16.057 [2024-07-11 16:41:52.644400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:26:16.057 [2024-07-11 16:41:52.644472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.057 [2024-07-11 16:41:52.644504] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:26:16.057 [2024-07-11 16:41:52.644529] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:16.057 [2024-07-11 16:41:52.645074] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.057 [2024-07-11 16:41:52.645138] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:16.057 [2024-07-11 16:41:52.645293] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:26:16.057 [2024-07-11 16:41:52.645322] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:16.057 BaseBdev4 00:26:16.057 16:41:52 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:26:16.057 16:41:52 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:16.314 [2024-07-11 16:41:53.008430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:16.314 [2024-07-11 16:41:53.008507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:16.314 [2024-07-11 16:41:53.008536] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:26:16.314 [2024-07-11 16:41:53.008563] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 
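(Editor's note: the base-bdev re-registration traced above, bdev_raid.sh@694-699, reduces to the loop sketched below. This is a reconstruction from the xtrace output rather than the verbatim bdev_raid.sh source; the rpc wrapper variable and the explicit bdev list are assumptions based on the names that appear in the log.)

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      # Tear down the passthru vbdev; the raid bdev loses this base device.
      $rpc bdev_passthru_delete "$bdev"
      # Re-create it on the same <name>_malloc backing. SPDK's examine path
      # then finds the raid superblock and re-claims the bdev, producing the
      # "raid superblock found on bdev ..." / "... is claimed" pairs above.
      $rpc bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
  done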
00:26:16.314 [2024-07-11 16:41:53.009142] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:16.314 [2024-07-11 16:41:53.009207] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:16.314 [2024-07-11 16:41:53.009373] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:26:16.314 [2024-07-11 16:41:53.009402] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:16.314 spare 00:26:16.314 16:41:53 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:26:16.314 16:41:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:26:16.314 16:41:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:26:16.314 16:41:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:26:16.314 16:41:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:26:16.314 16:41:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:26:16.314 16:41:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:26:16.314 16:41:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:26:16.314 16:41:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:26:16.314 16:41:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:26:16.314 16:41:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.314 16:41:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.314 [2024-07-11 16:41:53.109510] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:26:16.314 [2024-07-11 16:41:53.109531] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:26:16.314 [2024-07-11 16:41:53.109654] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004cc50 00:26:16.314 [2024-07-11 16:41:53.114773] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:26:16.314 [2024-07-11 16:41:53.114799] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:26:16.314 [2024-07-11 16:41:53.115005] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:16.573 16:41:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:26:16.573 "name": "raid_bdev1", 00:26:16.573 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:26:16.573 "strip_size_kb": 64, 00:26:16.573 "state": "online", 00:26:16.573 "raid_level": "raid5f", 00:26:16.573 "superblock": true, 00:26:16.573 "num_base_bdevs": 4, 00:26:16.573 "num_base_bdevs_discovered": 4, 00:26:16.573 "num_base_bdevs_operational": 4, 00:26:16.573 "base_bdevs_list": [ 00:26:16.573 { 00:26:16.573 "name": "spare", 00:26:16.573 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:26:16.573 "is_configured": true, 00:26:16.573 "data_offset": 2048, 00:26:16.573 "data_size": 63488 00:26:16.573 }, 00:26:16.573 { 00:26:16.573 "name": "BaseBdev2", 00:26:16.573 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:26:16.573 "is_configured": true, 00:26:16.573 "data_offset": 2048, 00:26:16.573 "data_size": 63488 00:26:16.573 }, 00:26:16.573 { 00:26:16.573 "name": "BaseBdev3", 00:26:16.573 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:26:16.573 "is_configured": true, 00:26:16.573 "data_offset": 2048, 00:26:16.573 "data_size": 63488 00:26:16.573 }, 00:26:16.573 { 00:26:16.573 "name": "BaseBdev4", 00:26:16.573 "uuid": 
"b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:26:16.573 "is_configured": true, 00:26:16.573 "data_offset": 2048, 00:26:16.573 "data_size": 63488 00:26:16.573 } 00:26:16.573 ] 00:26:16.573 }' 00:26:16.573 16:41:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:26:16.573 16:41:53 -- common/autotest_common.sh@10 -- # set +x 00:26:17.137 16:41:53 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:17.137 16:41:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:26:17.137 16:41:53 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:26:17.137 16:41:53 -- bdev/bdev_raid.sh@185 -- # local target=none 00:26:17.137 16:41:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:26:17.137 16:41:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.137 16:41:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:17.394 16:41:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:26:17.394 "name": "raid_bdev1", 00:26:17.394 "uuid": "15c31ecb-c335-453c-a913-379296464c62", 00:26:17.394 "strip_size_kb": 64, 00:26:17.394 "state": "online", 00:26:17.394 "raid_level": "raid5f", 00:26:17.394 "superblock": true, 00:26:17.394 "num_base_bdevs": 4, 00:26:17.394 "num_base_bdevs_discovered": 4, 00:26:17.394 "num_base_bdevs_operational": 4, 00:26:17.394 "base_bdevs_list": [ 00:26:17.394 { 00:26:17.394 "name": "spare", 00:26:17.394 "uuid": "f9f6485e-0329-5ec0-8c77-f2854bc3cae7", 00:26:17.394 "is_configured": true, 00:26:17.394 "data_offset": 2048, 00:26:17.394 "data_size": 63488 00:26:17.394 }, 00:26:17.394 { 00:26:17.394 "name": "BaseBdev2", 00:26:17.394 "uuid": "30d930bb-f9d2-55ca-8425-81bd51f37f5c", 00:26:17.394 "is_configured": true, 00:26:17.394 "data_offset": 2048, 00:26:17.394 "data_size": 63488 00:26:17.394 }, 00:26:17.394 { 00:26:17.394 "name": "BaseBdev3", 00:26:17.394 "uuid": "83acf65e-5232-50fb-9bee-b1265a87073b", 00:26:17.394 "is_configured": true, 00:26:17.394 "data_offset": 2048, 00:26:17.394 "data_size": 63488 00:26:17.394 }, 00:26:17.394 { 00:26:17.394 "name": "BaseBdev4", 00:26:17.394 "uuid": "b21814a9-b00f-5ad1-9cc6-52ad8c2c8e8b", 00:26:17.394 "is_configured": true, 00:26:17.394 "data_offset": 2048, 00:26:17.394 "data_size": 63488 00:26:17.394 } 00:26:17.394 ] 00:26:17.394 }' 00:26:17.394 16:41:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:26:17.394 16:41:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:17.394 16:41:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:26:17.394 16:41:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:26:17.394 16:41:54 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.394 16:41:54 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:26:17.652 16:41:54 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:26:17.652 16:41:54 -- bdev/bdev_raid.sh@709 -- # killprocess 135285 00:26:17.652 16:41:54 -- common/autotest_common.sh@926 -- # '[' -z 135285 ']' 00:26:17.652 16:41:54 -- common/autotest_common.sh@930 -- # kill -0 135285 00:26:17.652 16:41:54 -- common/autotest_common.sh@931 -- # uname 00:26:17.652 16:41:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:17.652 16:41:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 135285 00:26:17.652 killing process with pid 135285 00:26:17.652 Received shutdown signal, test time 
was about 60.000000 seconds 00:26:17.652 00:26:17.652 Latency(us) 00:26:17.652 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.652 =================================================================================================================== 00:26:17.652 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:17.652 16:41:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:17.652 16:41:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:17.652 16:41:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 135285' 00:26:17.652 16:41:54 -- common/autotest_common.sh@945 -- # kill 135285 00:26:17.652 16:41:54 -- common/autotest_common.sh@950 -- # wait 135285 00:26:17.652 [2024-07-11 16:41:54.421252] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:17.652 [2024-07-11 16:41:54.421404] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:17.652 [2024-07-11 16:41:54.421513] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:17.652 [2024-07-11 16:41:54.421535] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:26:18.218 [2024-07-11 16:41:54.733374] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:19.152 ************************************ 00:26:19.152 END TEST raid5f_rebuild_test_sb 00:26:19.152 ************************************ 00:26:19.152 16:41:55 -- bdev/bdev_raid.sh@711 -- # return 0 00:26:19.152 00:26:19.152 real 0m29.034s 00:26:19.152 user 0m44.687s 00:26:19.152 sys 0m2.712s 00:26:19.152 16:41:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.152 16:41:55 -- common/autotest_common.sh@10 -- # set +x 00:26:19.152 16:41:55 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:26:19.152 00:26:19.152 real 11m41.220s 00:26:19.152 user 19m34.648s 00:26:19.152 sys 1m19.108s 00:26:19.152 ************************************ 00:26:19.152 END TEST bdev_raid 00:26:19.152 ************************************ 00:26:19.152 16:41:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.152 16:41:55 -- common/autotest_common.sh@10 -- # set +x 00:26:19.152 16:41:55 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:19.152 16:41:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:19.152 16:41:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:19.152 16:41:55 -- common/autotest_common.sh@10 -- # set +x 00:26:19.152 ************************************ 00:26:19.152 START TEST bdevperf_config 00:26:19.152 ************************************ 00:26:19.152 16:41:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:26:19.152 * Looking for test storage... 
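(Editor's note: the rebuild verification that dominates the raid5f test traced earlier follows a simple poll-until-done pattern: fetch the raid bdev over JSON-RPC, extract the process fields with jq, and sleep-loop until the process type falls back to "none" or a deadline passes. A minimal sketch, reusing the socket path and the jq filters that appear verbatim in the trace; the 60-second window is an assumption, since the log only shows a precomputed timeout value of 714.)

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  timeout=$((SECONDS + 60))
  while ((SECONDS < timeout)); do
      info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      # '.process.type // "none"' maps a finished rebuild, which has no
      # process object at all, onto the literal string "none".
      [[ $(jq -r '.process.type // "none"' <<< "$info") == none ]] && break
      sleep 1
  done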
00:26:19.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:26:19.152 16:41:55 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:26:19.152 16:41:55 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:26:19.152 16:41:55 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:26:19.152 16:41:55 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:19.152 16:41:55 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:19.152 16:41:55 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:26:19.152 16:41:55 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:19.152 16:41:55 -- bdevperf/common.sh@9 -- # local rw=read 00:26:19.152 16:41:55 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:19.152 16:41:55 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:19.152 16:41:55 -- bdevperf/common.sh@13 -- # cat 00:26:19.152 00:26:19.152 16:41:55 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:19.152 16:41:55 -- bdevperf/common.sh@19 -- # echo 00:26:19.152 16:41:55 -- bdevperf/common.sh@20 -- # cat 00:26:19.152 16:41:55 -- bdevperf/test_config.sh@18 -- # create_job job0 00:26:19.152 00:26:19.152 16:41:55 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:19.152 16:41:55 -- bdevperf/common.sh@9 -- # local rw= 00:26:19.152 16:41:55 -- bdevperf/common.sh@10 -- # local filename= 00:26:19.152 16:41:55 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:19.152 16:41:55 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:19.152 16:41:55 -- bdevperf/common.sh@19 -- # echo 00:26:19.152 16:41:55 -- bdevperf/common.sh@20 -- # cat 00:26:19.152 16:41:55 -- bdevperf/test_config.sh@19 -- # create_job job1 00:26:19.152 16:41:55 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:19.152 16:41:55 -- bdevperf/common.sh@9 -- # local rw= 00:26:19.152 16:41:55 -- bdevperf/common.sh@10 -- # local filename= 00:26:19.152 16:41:55 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:19.152 16:41:55 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:19.152 16:41:55 -- bdevperf/common.sh@19 -- # echo 00:26:19.152 16:41:55 -- bdevperf/common.sh@20 -- # cat 00:26:19.152 00:26:19.152 16:41:55 -- bdevperf/test_config.sh@20 -- # create_job job2 00:26:19.152 16:41:55 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:19.152 00:26:19.152 16:41:55 -- bdevperf/common.sh@9 -- # local rw= 00:26:19.152 16:41:55 -- bdevperf/common.sh@10 -- # local filename= 00:26:19.152 16:41:55 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:19.152 16:41:55 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:19.152 16:41:55 -- bdevperf/common.sh@19 -- # echo 00:26:19.152 16:41:55 -- bdevperf/common.sh@20 -- # cat 00:26:19.152 16:41:55 -- bdevperf/test_config.sh@21 -- # create_job job3 00:26:19.152 16:41:55 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:19.152 00:26:19.152 16:41:55 -- bdevperf/common.sh@9 -- # local rw= 00:26:19.152 16:41:55 -- bdevperf/common.sh@10 -- # local filename= 00:26:19.152 16:41:55 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:19.152 16:41:55 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:19.152 16:41:55 -- bdevperf/common.sh@19 -- # echo 00:26:19.152 16:41:55 -- bdevperf/common.sh@20 -- # cat 00:26:19.152 16:41:55 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:23.332 16:41:59 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-11 16:41:55.879683] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:23.332 [2024-07-11 16:41:55.879867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136103 ] 00:26:23.332 Using job config with 4 jobs 00:26:23.332 [2024-07-11 16:41:56.030974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.332 [2024-07-11 16:41:56.216296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.332 cpumask for '\''job0'\'' is too big 00:26:23.332 cpumask for '\''job1'\'' is too big 00:26:23.332 cpumask for '\''job2'\'' is too big 00:26:23.332 cpumask for '\''job3'\'' is too big 00:26:23.332 Running I/O for 2 seconds... 00:26:23.332 00:26:23.332 Latency(us) 00:26:23.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.332 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:23.332 Malloc0 : 2.02 33262.57 32.48 0.00 0.00 7690.11 1414.98 11856.06 00:26:23.332 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:23.332 Malloc0 : 2.02 33240.11 32.46 0.00 0.00 7682.41 1385.19 10545.34 00:26:23.332 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:23.332 Malloc0 : 2.02 33218.71 32.44 0.00 0.00 7675.10 1355.40 9949.56 00:26:23.332 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:23.332 Malloc0 : 2.02 33196.83 32.42 0.00 0.00 7667.92 1340.51 9592.09 00:26:23.332 =================================================================================================================== 00:26:23.332 Total : 132918.22 129.80 0.00 0.00 7678.88 1340.51 11856.06' 00:26:23.332 16:41:59 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-11 16:41:55.879683] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:23.332 [2024-07-11 16:41:55.879867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136103 ] 00:26:23.332 Using job config with 4 jobs 00:26:23.332 [2024-07-11 16:41:56.030974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.332 [2024-07-11 16:41:56.216296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.332 cpumask for '\''job0'\'' is too big 00:26:23.332 cpumask for '\''job1'\'' is too big 00:26:23.332 cpumask for '\''job2'\'' is too big 00:26:23.332 cpumask for '\''job3'\'' is too big 00:26:23.332 Running I/O for 2 seconds... 
00:26:23.332 00:26:23.332 Latency(us) 00:26:23.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.332 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:23.332 Malloc0 : 2.02 33262.57 32.48 0.00 0.00 7690.11 1414.98 11856.06 00:26:23.332 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:23.332 Malloc0 : 2.02 33240.11 32.46 0.00 0.00 7682.41 1385.19 10545.34 00:26:23.332 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:23.332 Malloc0 : 2.02 33218.71 32.44 0.00 0.00 7675.10 1355.40 9949.56 00:26:23.332 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:23.332 Malloc0 : 2.02 33196.83 32.42 0.00 0.00 7667.92 1340.51 9592.09 00:26:23.332 =================================================================================================================== 00:26:23.332 Total : 132918.22 129.80 0.00 0.00 7678.88 1340.51 11856.06' 00:26:23.332 16:41:59 -- bdevperf/common.sh@32 -- # echo '[2024-07-11 16:41:55.879683] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:23.332 [2024-07-11 16:41:55.879867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136103 ] 00:26:23.332 Using job config with 4 jobs 00:26:23.332 [2024-07-11 16:41:56.030974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.332 [2024-07-11 16:41:56.216296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.332 cpumask for '\''job0'\'' is too big 00:26:23.332 cpumask for '\''job1'\'' is too big 00:26:23.332 cpumask for '\''job2'\'' is too big 00:26:23.332 cpumask for '\''job3'\'' is too big 00:26:23.332 Running I/O for 2 seconds... 00:26:23.332 00:26:23.332 Latency(us) 00:26:23.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.332 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:23.332 Malloc0 : 2.02 33262.57 32.48 0.00 0.00 7690.11 1414.98 11856.06 00:26:23.332 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:23.332 Malloc0 : 2.02 33240.11 32.46 0.00 0.00 7682.41 1385.19 10545.34 00:26:23.332 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:23.332 Malloc0 : 2.02 33218.71 32.44 0.00 0.00 7675.10 1355.40 9949.56 00:26:23.332 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:23.332 Malloc0 : 2.02 33196.83 32.42 0.00 0.00 7667.92 1340.51 9592.09 00:26:23.332 =================================================================================================================== 00:26:23.332 Total : 132918.22 129.80 0.00 0.00 7678.88 1340.51 11856.06' 00:26:23.332 16:41:59 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:23.332 16:41:59 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:23.332 16:41:59 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:26:23.332 16:41:59 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:23.332 [2024-07-11 16:41:59.820449] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
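[editor's note] The job-count assertion traced just above is plain text-scraping: common.sh@32 echoes the captured bdevperf output, two nested grep -oE calls extract the advertised job count, and test_config.sh@23 compares it against the expected value ([[ 4 == \4 ]]). A minimal sketch of that check, assuming the output is held in $bdevperf_output as in the trace:

    num_jobs=$(echo "$bdevperf_output" \
        | grep -oE 'Using job config with [0-9]+ jobs' \
        | grep -oE '[0-9]+')
    # the step passes only if the count matches the number of job sections created
    [[ $num_jobs == 4 ]]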
00:26:23.332 [2024-07-11 16:41:59.820606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136171 ] 00:26:23.332 [2024-07-11 16:41:59.974427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.590 [2024-07-11 16:42:00.226947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.155 cpumask for 'job0' is too big 00:26:24.155 cpumask for 'job1' is too big 00:26:24.155 cpumask for 'job2' is too big 00:26:24.155 cpumask for 'job3' is too big 00:26:28.336 16:42:04 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:26:28.336 Running I/O for 2 seconds... 00:26:28.336 00:26:28.336 Latency(us) 00:26:28.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.336 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:28.337 Malloc0 : 2.02 23479.88 22.93 0.00 0.00 10891.62 1824.58 16920.20 00:26:28.337 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:28.337 Malloc0 : 2.02 23458.04 22.91 0.00 0.00 10876.73 1921.40 14834.97 00:26:28.337 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:28.337 Malloc0 : 2.02 23434.85 22.89 0.00 0.00 10862.08 1921.40 12749.73 00:26:28.337 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:28.337 Malloc0 : 2.03 23504.86 22.95 0.00 0.00 10805.94 975.59 12392.26 00:26:28.337 =================================================================================================================== 00:26:28.337 Total : 93877.63 91.68 0.00 0.00 10859.02 975.59 16920.20' 00:26:28.337 16:42:04 -- bdevperf/test_config.sh@27 -- # cleanup 00:26:28.337 16:42:04 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:28.337 16:42:04 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:26:28.337 00:26:28.337 16:42:04 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:28.337 16:42:04 -- bdevperf/common.sh@9 -- # local rw=write 00:26:28.337 16:42:04 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:28.337 16:42:04 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:28.337 16:42:04 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:28.337 16:42:04 -- bdevperf/common.sh@19 -- # echo 00:26:28.337 16:42:04 -- bdevperf/common.sh@20 -- # cat 00:26:28.337 16:42:04 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:26:28.337 00:26:28.337 16:42:04 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:28.337 16:42:04 -- bdevperf/common.sh@9 -- # local rw=write 00:26:28.337 16:42:04 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:28.337 16:42:04 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:28.337 16:42:04 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:28.337 16:42:04 -- bdevperf/common.sh@19 -- # echo 00:26:28.337 16:42:04 -- bdevperf/common.sh@20 -- # cat 00:26:28.337 16:42:04 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:26:28.337 16:42:04 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:28.337 00:26:28.337 16:42:04 -- bdevperf/common.sh@9 -- # local rw=write 00:26:28.337 16:42:04 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:28.337 16:42:04 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:28.337 16:42:04 -- 
bdevperf/common.sh@18 -- # job='[job2]' 00:26:28.337 16:42:04 -- bdevperf/common.sh@19 -- # echo 00:26:28.337 16:42:04 -- bdevperf/common.sh@20 -- # cat 00:26:28.337 16:42:04 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:32.527 16:42:08 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-11 16:42:04.423122] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:32.527 [2024-07-11 16:42:04.424051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136234 ] 00:26:32.528 Using job config with 3 jobs 00:26:32.528 [2024-07-11 16:42:04.602809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.528 [2024-07-11 16:42:04.861369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.528 cpumask for '\''job0'\'' is too big 00:26:32.528 cpumask for '\''job1'\'' is too big 00:26:32.528 cpumask for '\''job2'\'' is too big 00:26:32.528 Running I/O for 2 seconds... 00:26:32.528 00:26:32.528 Latency(us) 00:26:32.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.528 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:32.528 Malloc0 : 2.01 32029.17 31.28 0.00 0.00 7984.58 1437.32 9413.35 00:26:32.528 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:32.528 Malloc0 : 2.02 32000.03 31.25 0.00 0.00 7978.50 1347.96 9472.93 00:26:32.528 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:32.528 Malloc0 : 2.02 31965.31 31.22 0.00 0.00 7974.14 1362.85 9353.77 00:26:32.528 =================================================================================================================== 00:26:32.528 Total : 95994.51 93.74 0.00 0.00 7979.08 1347.96 9472.93' 00:26:32.528 16:42:08 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-11 16:42:04.423122] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:32.528 [2024-07-11 16:42:04.424051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136234 ] 00:26:32.528 Using job config with 3 jobs 00:26:32.528 [2024-07-11 16:42:04.602809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.528 [2024-07-11 16:42:04.861369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.528 cpumask for '\''job0'\'' is too big 00:26:32.528 cpumask for '\''job1'\'' is too big 00:26:32.528 cpumask for '\''job2'\'' is too big 00:26:32.528 Running I/O for 2 seconds... 
00:26:32.528 00:26:32.528 Latency(us) 00:26:32.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.528 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:32.528 Malloc0 : 2.01 32029.17 31.28 0.00 0.00 7984.58 1437.32 9413.35 00:26:32.528 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:32.528 Malloc0 : 2.02 32000.03 31.25 0.00 0.00 7978.50 1347.96 9472.93 00:26:32.528 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:32.528 Malloc0 : 2.02 31965.31 31.22 0.00 0.00 7974.14 1362.85 9353.77 00:26:32.528 =================================================================================================================== 00:26:32.528 Total : 95994.51 93.74 0.00 0.00 7979.08 1347.96 9472.93' 00:26:32.528 16:42:08 -- bdevperf/common.sh@32 -- # echo '[2024-07-11 16:42:04.423122] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:32.528 [2024-07-11 16:42:04.424051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136234 ] 00:26:32.528 Using job config with 3 jobs 00:26:32.528 [2024-07-11 16:42:04.602809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.528 [2024-07-11 16:42:04.861369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.528 cpumask for '\''job0'\'' is too big 00:26:32.528 cpumask for '\''job1'\'' is too big 00:26:32.528 cpumask for '\''job2'\'' is too big 00:26:32.528 Running I/O for 2 seconds... 00:26:32.528 00:26:32.528 Latency(us) 00:26:32.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.528 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:32.528 Malloc0 : 2.01 32029.17 31.28 0.00 0.00 7984.58 1437.32 9413.35 00:26:32.528 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:32.528 Malloc0 : 2.02 32000.03 31.25 0.00 0.00 7978.50 1347.96 9472.93 00:26:32.528 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:32.528 Malloc0 : 2.02 31965.31 31.22 0.00 0.00 7974.14 1362.85 9353.77 00:26:32.528 =================================================================================================================== 00:26:32.528 Total : 95994.51 93.74 0.00 0.00 7979.08 1347.96 9472.93' 00:26:32.528 16:42:08 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:32.528 16:42:08 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:32.528 16:42:08 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:26:32.528 16:42:08 -- bdevperf/test_config.sh@35 -- # cleanup 00:26:32.528 16:42:08 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:32.528 16:42:08 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:26:32.528 16:42:08 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:32.528 16:42:08 -- bdevperf/common.sh@9 -- # local rw=rw 00:26:32.528 16:42:08 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:26:32.528 16:42:08 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:32.528 16:42:08 -- bdevperf/common.sh@13 -- # cat 00:26:32.528 16:42:08 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:32.528 00:26:32.528 16:42:08 -- bdevperf/common.sh@19 -- # echo 00:26:32.528 16:42:08 
-- bdevperf/common.sh@20 -- # cat 00:26:32.528 16:42:08 -- bdevperf/test_config.sh@38 -- # create_job job0 00:26:32.528 16:42:08 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:32.528 16:42:08 -- bdevperf/common.sh@9 -- # local rw= 00:26:32.528 16:42:08 -- bdevperf/common.sh@10 -- # local filename= 00:26:32.528 16:42:08 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:32.528 00:26:32.528 16:42:08 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:32.528 16:42:08 -- bdevperf/common.sh@19 -- # echo 00:26:32.528 16:42:08 -- bdevperf/common.sh@20 -- # cat 00:26:32.528 16:42:08 -- bdevperf/test_config.sh@39 -- # create_job job1 00:26:32.528 16:42:08 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:32.528 16:42:08 -- bdevperf/common.sh@9 -- # local rw= 00:26:32.528 16:42:08 -- bdevperf/common.sh@10 -- # local filename= 00:26:32.528 16:42:08 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:32.528 16:42:08 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:32.528 00:26:32.528 16:42:08 -- bdevperf/common.sh@19 -- # echo 00:26:32.528 16:42:08 -- bdevperf/common.sh@20 -- # cat 00:26:32.528 16:42:08 -- bdevperf/test_config.sh@40 -- # create_job job2 00:26:32.528 00:26:32.528 16:42:08 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:32.528 16:42:08 -- bdevperf/common.sh@9 -- # local rw= 00:26:32.528 16:42:08 -- bdevperf/common.sh@10 -- # local filename= 00:26:32.528 16:42:08 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:32.528 16:42:08 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:32.528 16:42:08 -- bdevperf/common.sh@19 -- # echo 00:26:32.528 16:42:08 -- bdevperf/common.sh@20 -- # cat 00:26:32.528 16:42:08 -- bdevperf/test_config.sh@41 -- # create_job job3 00:26:32.528 16:42:08 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:32.528 16:42:08 -- bdevperf/common.sh@9 -- # local rw= 00:26:32.528 16:42:08 -- bdevperf/common.sh@10 -- # local filename= 00:26:32.528 00:26:32.528 16:42:08 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:32.528 16:42:08 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:32.528 16:42:08 -- bdevperf/common.sh@19 -- # echo 00:26:32.528 16:42:08 -- bdevperf/common.sh@20 -- # cat 00:26:32.528 16:42:08 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:36.764 16:42:12 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-11 16:42:08.883645] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:36.764 [2024-07-11 16:42:08.884404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136296 ] 00:26:36.764 Using job config with 4 jobs 00:26:36.764 [2024-07-11 16:42:09.056034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.764 [2024-07-11 16:42:09.290266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.764 cpumask for '\''job0'\'' is too big 00:26:36.764 cpumask for '\''job1'\'' is too big 00:26:36.764 cpumask for '\''job2'\'' is too big 00:26:36.764 cpumask for '\''job3'\'' is too big 00:26:36.764 Running I/O for 2 seconds... 
00:26:36.764 00:26:36.764 Latency(us) 00:26:36.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.764 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc0 : 2.02 15321.33 14.96 0.00 0.00 16698.50 2964.01 30265.72 00:26:36.764 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc1 : 2.02 15309.74 14.95 0.00 0.00 16699.41 3410.85 30146.56 00:26:36.764 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc0 : 2.02 15299.87 14.94 0.00 0.00 16666.51 2770.39 27405.96 00:26:36.764 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc1 : 2.03 15289.69 14.93 0.00 0.00 16666.77 3306.59 27405.96 00:26:36.764 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc0 : 2.04 15343.60 14.98 0.00 0.00 16558.54 4766.25 23235.49 00:26:36.764 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc1 : 2.04 15332.72 14.97 0.00 0.00 16547.41 5600.35 24069.59 00:26:36.764 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc0 : 2.04 15322.95 14.96 0.00 0.00 16488.91 3738.53 24069.59 00:26:36.764 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc1 : 2.04 15312.96 14.95 0.00 0.00 16480.72 3634.27 22878.02 00:26:36.764 =================================================================================================================== 00:26:36.764 Total : 122532.85 119.66 0.00 0.00 16600.51 2770.39 30265.72' 00:26:36.764 16:42:12 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-11 16:42:08.883645] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:36.764 [2024-07-11 16:42:08.884404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136296 ] 00:26:36.764 Using job config with 4 jobs 00:26:36.764 [2024-07-11 16:42:09.056034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.764 [2024-07-11 16:42:09.290266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.764 cpumask for '\''job0'\'' is too big 00:26:36.764 cpumask for '\''job1'\'' is too big 00:26:36.764 cpumask for '\''job2'\'' is too big 00:26:36.764 cpumask for '\''job3'\'' is too big 00:26:36.764 Running I/O for 2 seconds... 
00:26:36.764 00:26:36.764 Latency(us) 00:26:36.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.764 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc0 : 2.02 15321.33 14.96 0.00 0.00 16698.50 2964.01 30265.72 00:26:36.764 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc1 : 2.02 15309.74 14.95 0.00 0.00 16699.41 3410.85 30146.56 00:26:36.764 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc0 : 2.02 15299.87 14.94 0.00 0.00 16666.51 2770.39 27405.96 00:26:36.764 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc1 : 2.03 15289.69 14.93 0.00 0.00 16666.77 3306.59 27405.96 00:26:36.764 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc0 : 2.04 15343.60 14.98 0.00 0.00 16558.54 4766.25 23235.49 00:26:36.764 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc1 : 2.04 15332.72 14.97 0.00 0.00 16547.41 5600.35 24069.59 00:26:36.764 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc0 : 2.04 15322.95 14.96 0.00 0.00 16488.91 3738.53 24069.59 00:26:36.764 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.764 Malloc1 : 2.04 15312.96 14.95 0.00 0.00 16480.72 3634.27 22878.02 00:26:36.764 =================================================================================================================== 00:26:36.764 Total : 122532.85 119.66 0.00 0.00 16600.51 2770.39 30265.72' 00:26:36.764 16:42:12 -- bdevperf/common.sh@32 -- # echo '[2024-07-11 16:42:08.883645] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:36.764 [2024-07-11 16:42:08.884404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136296 ] 00:26:36.764 Using job config with 4 jobs 00:26:36.764 [2024-07-11 16:42:09.056034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.764 [2024-07-11 16:42:09.290266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.765 cpumask for '\''job0'\'' is too big 00:26:36.765 cpumask for '\''job1'\'' is too big 00:26:36.765 cpumask for '\''job2'\'' is too big 00:26:36.765 cpumask for '\''job3'\'' is too big 00:26:36.765 Running I/O for 2 seconds... 
00:26:36.765 00:26:36.765 Latency(us) 00:26:36.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.765 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.765 Malloc0 : 2.02 15321.33 14.96 0.00 0.00 16698.50 2964.01 30265.72 00:26:36.765 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.765 Malloc1 : 2.02 15309.74 14.95 0.00 0.00 16699.41 3410.85 30146.56 00:26:36.765 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.765 Malloc0 : 2.02 15299.87 14.94 0.00 0.00 16666.51 2770.39 27405.96 00:26:36.765 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.765 Malloc1 : 2.03 15289.69 14.93 0.00 0.00 16666.77 3306.59 27405.96 00:26:36.765 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.765 Malloc0 : 2.04 15343.60 14.98 0.00 0.00 16558.54 4766.25 23235.49 00:26:36.765 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.765 Malloc1 : 2.04 15332.72 14.97 0.00 0.00 16547.41 5600.35 24069.59 00:26:36.765 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.765 Malloc0 : 2.04 15322.95 14.96 0.00 0.00 16488.91 3738.53 24069.59 00:26:36.765 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:36.765 Malloc1 : 2.04 15312.96 14.95 0.00 0.00 16480.72 3634.27 22878.02 00:26:36.765 =================================================================================================================== 00:26:36.765 Total : 122532.85 119.66 0.00 0.00 16600.51 2770.39 30265.72' 00:26:36.765 16:42:12 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:36.765 16:42:12 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:36.765 16:42:12 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:26:36.765 16:42:12 -- bdevperf/test_config.sh@44 -- # cleanup 00:26:36.765 16:42:12 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:36.765 16:42:12 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:36.765 00:26:36.765 real 0m17.192s 00:26:36.765 user 0m15.512s 00:26:36.765 sys 0m1.106s 00:26:36.765 ************************************ 00:26:36.765 END TEST bdevperf_config 00:26:36.765 ************************************ 00:26:36.765 16:42:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:36.765 16:42:12 -- common/autotest_common.sh@10 -- # set +x 00:26:36.765 16:42:12 -- spdk/autotest.sh@198 -- # uname -s 00:26:36.765 16:42:12 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:26:36.765 16:42:12 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:36.765 16:42:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:36.765 16:42:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:36.765 16:42:12 -- common/autotest_common.sh@10 -- # set +x 00:26:36.765 ************************************ 00:26:36.765 START TEST reactor_set_interrupt 00:26:36.765 ************************************ 00:26:36.765 16:42:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:36.765 * Looking for test storage... 
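[editor's note] Before the interrupt tests take over: each create_job call traced in the bdevperf_config run above appends an INI-style [section] to test.conf via cat, emitting rw= and filename= lines only when those arguments are supplied. A sketch of what the final rw config plausibly contained (key names inferred from create_job's rw/filename parameters, not captured verbatim in the trace):

    [global]
    rw=rw
    filename=Malloc0:Malloc1
    [job0]
    [job1]
    [job2]
    [job3]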
00:26:36.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:36.765 16:42:13 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:36.765 16:42:13 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:36.765 16:42:13 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:36.765 16:42:13 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:36.765 16:42:13 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:26:36.765 16:42:13 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:36.765 16:42:13 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:36.765 16:42:13 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:36.765 16:42:13 -- common/autotest_common.sh@34 -- # set -e 00:26:36.765 16:42:13 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:36.765 16:42:13 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:36.765 16:42:13 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:36.765 16:42:13 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:36.765 16:42:13 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:36.765 16:42:13 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:26:36.765 16:42:13 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:26:36.765 16:42:13 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:26:36.765 16:42:13 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:26:36.765 16:42:13 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:26:36.765 16:42:13 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:26:36.765 16:42:13 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:26:36.765 16:42:13 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:26:36.765 16:42:13 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:26:36.765 16:42:13 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:26:36.765 16:42:13 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:26:36.765 16:42:13 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:26:36.765 16:42:13 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:26:36.765 16:42:13 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:26:36.765 16:42:13 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:26:36.765 16:42:13 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:26:36.765 16:42:13 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:26:36.765 16:42:13 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:26:36.765 16:42:13 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:26:36.765 16:42:13 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:26:36.765 16:42:13 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:36.765 16:42:13 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:26:36.765 16:42:13 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:26:36.765 16:42:13 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:26:36.765 16:42:13 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:26:36.765 16:42:13 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:26:36.765 16:42:13 -- common/build_config.sh@28 -- # 
CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:36.765 16:42:13 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:26:36.765 16:42:13 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:26:36.765 16:42:13 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:26:36.765 16:42:13 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:26:36.765 16:42:13 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:26:36.765 16:42:13 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:26:36.765 16:42:13 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:26:36.765 16:42:13 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:26:36.765 16:42:13 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:36.765 16:42:13 -- common/build_config.sh@38 -- # CONFIG_ASAN=y 00:26:36.765 16:42:13 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:26:36.765 16:42:13 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:26:36.765 16:42:13 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:26:36.765 16:42:13 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:26:36.765 16:42:13 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:26:36.765 16:42:13 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:26:36.765 16:42:13 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:36.765 16:42:13 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:26:36.765 16:42:13 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:26:36.765 16:42:13 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:26:36.765 16:42:13 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:26:36.765 16:42:13 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:26:36.765 16:42:13 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:36.765 16:42:13 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:26:36.765 16:42:13 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:26:36.765 16:42:13 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:26:36.765 16:42:13 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:36.765 16:42:13 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:26:36.765 16:42:13 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:26:36.765 16:42:13 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:36.765 16:42:13 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:36.765 16:42:13 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:36.765 16:42:13 -- common/build_config.sh@61 -- # CONFIG_CROSS_PREFIX= 00:26:36.765 16:42:13 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:26:36.765 16:42:13 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:26:36.765 16:42:13 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:26:36.765 16:42:13 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:26:36.765 16:42:13 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:26:36.765 16:42:13 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:26:36.765 16:42:13 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:36.765 16:42:13 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:26:36.765 16:42:13 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:26:36.765 16:42:13 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:26:36.765 16:42:13 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:26:36.765 16:42:13 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:26:36.765 16:42:13 -- common/build_config.sh@74 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:36.765 16:42:13 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:26:36.765 16:42:13 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:26:36.765 16:42:13 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:26:36.765 16:42:13 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:26:36.765 16:42:13 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:36.765 16:42:13 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:36.765 16:42:13 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:36.765 16:42:13 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:36.765 16:42:13 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:36.765 16:42:13 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:36.765 16:42:13 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:36.765 16:42:13 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:36.765 16:42:13 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:36.765 16:42:13 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:36.765 16:42:13 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:36.765 16:42:13 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:36.765 16:42:13 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:36.765 16:42:13 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:36.765 16:42:13 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:36.765 16:42:13 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:36.765 16:42:13 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:36.765 #define SPDK_CONFIG_H 00:26:36.765 #define SPDK_CONFIG_APPS 1 00:26:36.765 #define SPDK_CONFIG_ARCH native 00:26:36.765 #define SPDK_CONFIG_ASAN 1 00:26:36.765 #undef SPDK_CONFIG_AVAHI 00:26:36.765 #undef SPDK_CONFIG_CET 00:26:36.765 #define SPDK_CONFIG_COVERAGE 1 00:26:36.765 #define SPDK_CONFIG_CROSS_PREFIX 00:26:36.766 #undef SPDK_CONFIG_CRYPTO 00:26:36.766 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:36.766 #undef SPDK_CONFIG_CUSTOMOCF 00:26:36.766 #undef SPDK_CONFIG_DAOS 00:26:36.766 #define SPDK_CONFIG_DAOS_DIR 00:26:36.766 #define SPDK_CONFIG_DEBUG 1 00:26:36.766 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:36.766 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:36.766 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:36.766 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:36.766 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:36.766 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:36.766 #define SPDK_CONFIG_EXAMPLES 1 00:26:36.766 #undef SPDK_CONFIG_FC 00:26:36.766 #define SPDK_CONFIG_FC_PATH 00:26:36.766 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:36.766 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:36.766 #undef SPDK_CONFIG_FUSE 00:26:36.766 #undef SPDK_CONFIG_FUZZER 00:26:36.766 #define SPDK_CONFIG_FUZZER_LIB 00:26:36.766 #undef SPDK_CONFIG_GOLANG 00:26:36.766 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:36.766 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:36.766 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:36.766 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:36.766 #define 
SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:36.766 #define SPDK_CONFIG_IDXD 1 00:26:36.766 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:36.766 #undef SPDK_CONFIG_IPSEC_MB 00:26:36.766 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:36.766 #define SPDK_CONFIG_ISAL 1 00:26:36.766 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:36.766 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:36.766 #define SPDK_CONFIG_LIBDIR 00:26:36.766 #undef SPDK_CONFIG_LTO 00:26:36.766 #define SPDK_CONFIG_MAX_LCORES 00:26:36.766 #define SPDK_CONFIG_NVME_CUSE 1 00:26:36.766 #undef SPDK_CONFIG_OCF 00:26:36.766 #define SPDK_CONFIG_OCF_PATH 00:26:36.766 #define SPDK_CONFIG_OPENSSL_PATH 00:26:36.766 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:36.766 #undef SPDK_CONFIG_PGO_USE 00:26:36.766 #define SPDK_CONFIG_PREFIX /usr/local 00:26:36.766 #define SPDK_CONFIG_RAID5F 1 00:26:36.766 #undef SPDK_CONFIG_RBD 00:26:36.766 #define SPDK_CONFIG_RDMA 1 00:26:36.766 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:36.766 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:36.766 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:36.766 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:36.766 #undef SPDK_CONFIG_SHARED 00:26:36.766 #undef SPDK_CONFIG_SMA 00:26:36.766 #define SPDK_CONFIG_TESTS 1 00:26:36.766 #undef SPDK_CONFIG_TSAN 00:26:36.766 #undef SPDK_CONFIG_UBLK 00:26:36.766 #define SPDK_CONFIG_UBSAN 1 00:26:36.766 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:36.766 #undef SPDK_CONFIG_URING 00:26:36.766 #define SPDK_CONFIG_URING_PATH 00:26:36.766 #undef SPDK_CONFIG_URING_ZNS 00:26:36.766 #undef SPDK_CONFIG_USDT 00:26:36.766 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:36.766 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:36.766 #undef SPDK_CONFIG_VFIO_USER 00:26:36.766 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:36.766 #define SPDK_CONFIG_VHOST 1 00:26:36.766 #define SPDK_CONFIG_VIRTIO 1 00:26:36.766 #undef SPDK_CONFIG_VTUNE 00:26:36.766 #define SPDK_CONFIG_VTUNE_DIR 00:26:36.766 #define SPDK_CONFIG_WERROR 1 00:26:36.766 #define SPDK_CONFIG_WPDK_DIR 00:26:36.766 #undef SPDK_CONFIG_XNVME 00:26:36.766 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:36.766 16:42:13 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:36.766 16:42:13 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:36.766 16:42:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.766 16:42:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.766 16:42:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.766 16:42:13 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:36.766 16:42:13 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:36.766 16:42:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:36.766 16:42:13 -- paths/export.sh@5 -- # export PATH 00:26:36.766 16:42:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:36.766 16:42:13 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:36.766 16:42:13 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:36.766 16:42:13 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:36.766 16:42:13 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:36.766 16:42:13 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:36.766 16:42:13 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:36.766 16:42:13 -- pm/common@16 -- # TEST_TAG=N/A 00:26:36.766 16:42:13 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:36.766 16:42:13 -- common/autotest_common.sh@52 -- # : 1 00:26:36.766 16:42:13 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:36.766 16:42:13 -- common/autotest_common.sh@56 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:36.766 16:42:13 -- common/autotest_common.sh@58 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:36.766 16:42:13 -- common/autotest_common.sh@60 -- # : 1 00:26:36.766 16:42:13 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:36.766 16:42:13 -- common/autotest_common.sh@62 -- # : 1 00:26:36.766 16:42:13 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:36.766 16:42:13 -- common/autotest_common.sh@64 -- # : 00:26:36.766 16:42:13 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:36.766 16:42:13 -- common/autotest_common.sh@66 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:36.766 16:42:13 -- common/autotest_common.sh@68 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:36.766 16:42:13 -- common/autotest_common.sh@70 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:36.766 16:42:13 -- common/autotest_common.sh@72 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:36.766 16:42:13 -- common/autotest_common.sh@74 -- # : 1 00:26:36.766 16:42:13 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:36.766 16:42:13 -- common/autotest_common.sh@76 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:36.766 16:42:13 -- common/autotest_common.sh@78 -- # : 0 00:26:36.766 16:42:13 -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:36.766 16:42:13 -- common/autotest_common.sh@80 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:36.766 16:42:13 -- common/autotest_common.sh@82 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:36.766 16:42:13 -- common/autotest_common.sh@84 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:36.766 16:42:13 -- common/autotest_common.sh@86 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:36.766 16:42:13 -- common/autotest_common.sh@88 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:36.766 16:42:13 -- common/autotest_common.sh@90 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:36.766 16:42:13 -- common/autotest_common.sh@92 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:36.766 16:42:13 -- common/autotest_common.sh@94 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:36.766 16:42:13 -- common/autotest_common.sh@96 -- # : rdma 00:26:36.766 16:42:13 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:36.766 16:42:13 -- common/autotest_common.sh@98 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:36.766 16:42:13 -- common/autotest_common.sh@100 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:36.766 16:42:13 -- common/autotest_common.sh@102 -- # : 1 00:26:36.766 16:42:13 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:36.766 16:42:13 -- common/autotest_common.sh@104 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:36.766 16:42:13 -- common/autotest_common.sh@106 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:36.766 16:42:13 -- common/autotest_common.sh@108 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:36.766 16:42:13 -- common/autotest_common.sh@110 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:36.766 16:42:13 -- common/autotest_common.sh@112 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:36.766 16:42:13 -- common/autotest_common.sh@114 -- # : 1 00:26:36.766 16:42:13 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:36.766 16:42:13 -- common/autotest_common.sh@116 -- # : 1 00:26:36.766 16:42:13 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:36.766 16:42:13 -- common/autotest_common.sh@118 -- # : 00:26:36.766 16:42:13 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:36.766 16:42:13 -- common/autotest_common.sh@120 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:36.766 16:42:13 -- common/autotest_common.sh@122 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:36.766 16:42:13 -- common/autotest_common.sh@124 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:36.766 16:42:13 -- common/autotest_common.sh@126 -- # : 0 00:26:36.766 
16:42:13 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:36.766 16:42:13 -- common/autotest_common.sh@128 -- # : 0 00:26:36.766 16:42:13 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:36.766 16:42:13 -- common/autotest_common.sh@130 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:36.767 16:42:13 -- common/autotest_common.sh@132 -- # : 00:26:36.767 16:42:13 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:36.767 16:42:13 -- common/autotest_common.sh@134 -- # : true 00:26:36.767 16:42:13 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:36.767 16:42:13 -- common/autotest_common.sh@136 -- # : 1 00:26:36.767 16:42:13 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:36.767 16:42:13 -- common/autotest_common.sh@138 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:36.767 16:42:13 -- common/autotest_common.sh@140 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:36.767 16:42:13 -- common/autotest_common.sh@142 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:36.767 16:42:13 -- common/autotest_common.sh@144 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:36.767 16:42:13 -- common/autotest_common.sh@146 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:36.767 16:42:13 -- common/autotest_common.sh@148 -- # : 00:26:36.767 16:42:13 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:36.767 16:42:13 -- common/autotest_common.sh@150 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:36.767 16:42:13 -- common/autotest_common.sh@152 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:36.767 16:42:13 -- common/autotest_common.sh@154 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:36.767 16:42:13 -- common/autotest_common.sh@156 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:36.767 16:42:13 -- common/autotest_common.sh@158 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:36.767 16:42:13 -- common/autotest_common.sh@160 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:36.767 16:42:13 -- common/autotest_common.sh@163 -- # : 00:26:36.767 16:42:13 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:36.767 16:42:13 -- common/autotest_common.sh@165 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:36.767 16:42:13 -- common/autotest_common.sh@167 -- # : 0 00:26:36.767 16:42:13 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:36.767 16:42:13 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:36.767 16:42:13 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:36.767 16:42:13 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:36.767 16:42:13 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:36.767 16:42:13 -- 
common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:36.767 16:42:13 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:36.767 16:42:13 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:36.767 16:42:13 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:36.767 16:42:13 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:36.767 16:42:13 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:36.767 16:42:13 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:36.767 16:42:13 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:36.767 16:42:13 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:36.767 16:42:13 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:36.767 16:42:13 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:36.767 16:42:13 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:36.767 16:42:13 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:36.767 16:42:13 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:36.767 16:42:13 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:36.767 16:42:13 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:36.767 16:42:13 -- common/autotest_common.sh@196 -- # cat 00:26:36.767 16:42:13 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:36.767 16:42:13 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:36.767 16:42:13 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:36.767 16:42:13 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:36.767 
16:42:13 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:36.767 16:42:13 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:36.767 16:42:13 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:36.767 16:42:13 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:36.767 16:42:13 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:36.767 16:42:13 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:36.767 16:42:13 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:36.767 16:42:13 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:36.767 16:42:13 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:36.767 16:42:13 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:36.767 16:42:13 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:36.767 16:42:13 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:36.767 16:42:13 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:36.767 16:42:13 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:36.767 16:42:13 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:36.767 16:42:13 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:26:36.767 16:42:13 -- common/autotest_common.sh@249 -- # export valgrind= 00:26:36.767 16:42:13 -- common/autotest_common.sh@249 -- # valgrind= 00:26:36.767 16:42:13 -- common/autotest_common.sh@255 -- # uname -s 00:26:36.767 16:42:13 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:26:36.767 16:42:13 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:26:36.767 16:42:13 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:26:36.767 16:42:13 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:26:36.767 16:42:13 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:36.767 16:42:13 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:36.767 16:42:13 -- common/autotest_common.sh@265 -- # MAKE=make 00:26:36.767 16:42:13 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:26:36.767 16:42:13 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:26:36.767 16:42:13 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:26:36.767 16:42:13 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:36.767 16:42:13 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:26:36.767 16:42:13 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:26:36.767 16:42:13 -- common/autotest_common.sh@309 -- # [[ -z 136400 ]] 00:26:36.767 16:42:13 -- common/autotest_common.sh@309 -- # kill -0 136400 00:26:36.767 16:42:13 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:26:36.767 16:42:13 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:26:36.767 16:42:13 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:26:36.767 16:42:13 -- common/autotest_common.sh@322 -- # local mount target_dir 00:26:36.767 16:42:13 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:26:36.767 16:42:13 -- common/autotest_common.sh@325 -- # local source fs size 
avail mount use 00:26:36.767 16:42:13 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:26:36.767 16:42:13 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:26:36.767 16:42:13 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.avZo3q 00:26:36.767 16:42:13 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:36.767 16:42:13 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:26:36.767 16:42:13 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:26:36.767 16:42:13 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.avZo3q/tests/interrupt /tmp/spdk.avZo3q 00:26:36.767 16:42:13 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:26:36.767 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.767 16:42:13 -- common/autotest_common.sh@318 -- # df -T 00:26:36.767 16:42:13 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:26:36.767 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev 00:26:36.767 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:26:36.767 16:42:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224461824 00:26:36.767 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224461824 00:26:36.767 16:42:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:36.767 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.767 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:36.767 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:36.767 16:42:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249763328 00:26:36.767 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688 00:26:36.767 16:42:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=4751360 00:26:36.767 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.767 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:26:36.767 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:26:36.767 16:42:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=10616201216 00:26:36.767 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:26:36.767 16:42:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=9983815680 00:26:36.768 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=6269968384 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272561152 00:26:36.768 16:42:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:26:36.768 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:26:36.768 16:42:13 -- 
common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:36.768 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=6272561152 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272561152 00:26:36.768 16:42:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:36.768 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592 00:26:36.768 16:42:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:26:36.768 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:26:36.768 16:42:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:26:36.768 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536 00:26:36.768 16:42:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536 00:26:36.768 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920 00:26:36.768 16:42:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=96337920 00:26:36.768 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592 00:26:36.768 16:42:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:36.768 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:26:36.768 16:42:13 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=97899995136 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:26:36.768 16:42:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=1802784768 00:26:36.768 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392 00:26:36.768 16:42:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392 00:26:36.768 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4 00:26:36.768 16:42:13 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:36.768 16:42:13 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:26:36.768 16:42:13 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:26:36.768 16:42:13 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:36.768 16:42:13 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:26:36.768 * Looking for test storage... 00:26:36.768 16:42:13 -- common/autotest_common.sh@359 -- # local target_space new_size 00:26:36.768 16:42:13 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:26:36.768 16:42:13 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:36.768 16:42:13 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:36.768 16:42:13 -- common/autotest_common.sh@363 -- # mount=/ 00:26:36.768 16:42:13 -- common/autotest_common.sh@365 -- # target_space=10616201216 00:26:36.768 16:42:13 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:26:36.768 16:42:13 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:26:36.768 16:42:13 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:26:36.768 16:42:13 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:26:36.768 16:42:13 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:26:36.768 16:42:13 -- common/autotest_common.sh@372 -- # new_size=12198408192 00:26:36.768 16:42:13 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:36.768 16:42:13 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:36.768 16:42:13 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:36.768 16:42:13 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:36.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:36.768 16:42:13 -- common/autotest_common.sh@380 -- # return 0 00:26:36.768 16:42:13 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:26:36.768 16:42:13 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:26:36.768 16:42:13 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:36.768 16:42:13 -- 
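The @318-@379 trace above is set_test_storage walking every mount and choosing a directory with enough free space for the test. A condensed sketch of that logic (variable names follow the trace; the mktemp fallback and tmpfs/ramfs special cases are abbreviated, and the new_size line reproduces the used-plus-requested arithmetic implied by the numbers in the log):

set_test_storage() {
  local requested_size=$1 target_space new_size mount target_dir
  local -A mounts fss sizes avails uses
  local source fs size use avail _
  # one df snapshot, one associative-array entry per mount
  while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source; fss["$mount"]=$fs
    sizes["$mount"]=$size; uses["$mount"]=$use; avails["$mount"]=$avail
  done < <(df -T | grep -v Filesystem)
  for target_dir in "${storage_candidates[@]}"; do  # testdir, /tmp fallback, ...
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails["$mount"]}
    (( target_space == 0 || target_space < requested_size )) && continue
    # skip a disk the test data would fill past 95% (new_size = used + requested)
    new_size=$((${uses["$mount"]} + requested_size))
    (( new_size * 100 / ${sizes["$mount"]} > 95 )) && continue
    export SPDK_TEST_STORAGE=$target_dir
    printf '* Found test storage at %s\n' "$target_dir"
    return 0
  done
}

In this run the root ext4 mount wins (10616201216 bytes available against a 2214592512-byte request), which is why the log prints "Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt".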
common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:36.768 16:42:13 -- common/autotest_common.sh@1672 -- # true 00:26:36.768 16:42:13 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:26:36.768 16:42:13 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:36.768 16:42:13 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:36.768 16:42:13 -- common/autotest_common.sh@27 -- # exec 00:26:36.768 16:42:13 -- common/autotest_common.sh@29 -- # exec 00:26:36.768 16:42:13 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:36.768 16:42:13 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:26:36.768 16:42:13 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:36.768 16:42:13 -- common/autotest_common.sh@18 -- # set -x 00:26:36.768 16:42:13 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:36.768 16:42:13 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:36.768 16:42:13 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:36.768 16:42:13 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:36.768 16:42:13 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:36.768 16:42:13 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:36.768 16:42:13 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:36.768 16:42:13 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:36.768 16:42:13 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:26:36.769 16:42:13 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.769 16:42:13 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:36.769 16:42:13 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=136440 00:26:36.769 16:42:13 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:36.769 16:42:13 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 136440 /var/tmp/spdk.sock 00:26:36.769 16:42:13 -- common/autotest_common.sh@819 -- # '[' -z 136440 ']' 00:26:36.769 16:42:13 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:36.769 16:42:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:36.769 16:42:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:36.769 16:42:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:36.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:36.769 16:42:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:36.769 16:42:13 -- common/autotest_common.sh@10 -- # set +x 00:26:36.769 [2024-07-11 16:42:13.194169] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
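start_intr_tgt above launches the interrupt_tgt example with -m 0x07 (three reactors) and then blocks in waitforlisten until the RPC socket answers. A condensed sketch of waitforlisten consistent with the traced locals (the probe command and retry cadence are assumptions; rpc_get_methods stands in for whatever check the real helper performs):

waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
  local max_retries=100 i
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2> /dev/null || return 1   # target died while starting
    if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
      return 0                                # socket is up and answering
    fi
    sleep 0.1
  done
  return 1
}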
00:26:36.769 [2024-07-11 16:42:13.194964] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136440 ] 00:26:36.769 [2024-07-11 16:42:13.369200] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:36.769 [2024-07-11 16:42:13.528365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.769 [2024-07-11 16:42:13.528466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.769 [2024-07-11 16:42:13.528459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:37.026 [2024-07-11 16:42:13.779612] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:37.593 16:42:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:37.593 16:42:14 -- common/autotest_common.sh@852 -- # return 0 00:26:37.593 16:42:14 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:26:37.593 16:42:14 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:37.851 Malloc0 00:26:37.851 Malloc1 00:26:37.851 Malloc2 00:26:37.851 16:42:14 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:26:37.851 16:42:14 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:37.851 16:42:14 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:37.851 16:42:14 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:37.851 5000+0 records in 00:26:37.851 5000+0 records out 00:26:37.851 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0128053 s, 800 MB/s 00:26:37.851 16:42:14 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:38.109 AIO0 00:26:38.109 16:42:14 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 136440 00:26:38.109 16:42:14 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 136440 without_thd 00:26:38.109 16:42:14 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=136440 00:26:38.109 16:42:14 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:26:38.109 16:42:14 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:38.109 16:42:14 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:38.109 16:42:14 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:38.109 16:42:14 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:38.109 16:42:14 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:38.109 16:42:14 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:38.109 16:42:14 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:38.109 16:42:14 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:38.367 16:42:15 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:38.367 16:42:15 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:38.367 16:42:15 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:26:38.367 16:42:15 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:38.367 16:42:15 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:38.367 16:42:15 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:38.367 16:42:15 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:38.367 16:42:15 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:38.367 16:42:15 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:38.626 16:42:15 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:38.626 spdk_thread ids are 1 on reactor0. 00:26:38.626 16:42:15 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:38.626 16:42:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:38.626 16:42:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136440 0 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136440 0 idle 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@33 -- # local pid=136440 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136440 -w 256 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136440 root 20 0 20.1t 145652 28724 S 0.0 1.2 0:00.64 reactor_0' 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@48 -- # echo 136440 root 20 0 20.1t 145652 28724 S 0.0 1.2 0:00.64 reactor_0 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:38.626 16:42:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:38.626 16:42:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136440 1 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136440 1 idle 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@33 -- # local pid=136440 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:38.626 
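The two thread_get_stats | jq pipelines above are reactor_get_thread_ids: reactor 0's mask (0x1) maps to app_thread (id 1), while reactor 2 (0x4) has no thread yet, hence the empty echo. A sketch per the trace (the hex-to-bare-number conversion is an assumption consistent with reactor_cpumask=1 and reactor_cpumask=4 in the log):

reactor_get_thread_ids() {
  local reactor_cpumask=$1
  reactor_cpumask=$((reactor_cpumask))   # 0x1 -> 1, 0x4 -> 4, as seen in the trace
  $rpc_py thread_get_stats \
    | jq --arg reactor_cpumask "$reactor_cpumask" \
         '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
}

The caller captures the result as an array, thd0_ids=($(reactor_get_thread_ids $r0_mask)), which feeds the thread_set_cpumask migration further down.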
16:42:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136440 -w 256 00:26:38.626 16:42:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136452 root 20 0 20.1t 145652 28724 S 0.0 1.2 0:00.00 reactor_1' 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@48 -- # echo 136452 root 20 0 20.1t 145652 28724 S 0.0 1.2 0:00.00 reactor_1 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:38.884 16:42:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:38.884 16:42:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136440 2 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136440 2 idle 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@33 -- # local pid=136440 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136440 -w 256 00:26:38.884 16:42:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:39.143 16:42:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136453 root 20 0 20.1t 145652 28724 S 0.0 1.2 0:00.00 reactor_2' 00:26:39.143 16:42:15 -- interrupt/interrupt_common.sh@48 -- # echo 136453 root 20 0 20.1t 145652 28724 S 0.0 1.2 0:00.00 reactor_2 00:26:39.143 16:42:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:39.143 16:42:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:39.143 16:42:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:39.143 16:42:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:39.143 16:42:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:39.143 16:42:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:39.143 16:42:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:39.143 16:42:15 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:39.143 16:42:15 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:26:39.143 16:42:15 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
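Each reactor_is_idle / reactor_is_busy call above reduces to one thread-level top snapshot. A condensed sketch (the trace also keeps a retry budget of 10, elided here; the decimal truncation is an assumption matching cpu_rate=99.9 -> 99 in the log):

reactor_is_busy_or_idle() {
  local pid=$1 idx=$2 state=$3
  local top_reactor cpu_rate
  # -H lists threads, -n 1 takes a single sample, -w 256 widens the output
  top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
  cpu_rate=$(echo "$top_reactor" | awk '{print $9}' | sed -e 's/^\s*//g')
  cpu_rate=${cpu_rate%.*}                # 99.9 -> 99, 0.0 -> 0
  if [[ $state == busy ]]; then
    [[ $cpu_rate -lt 70 ]] && return 1   # a polling reactor should sit near 100%
  else
    [[ $cpu_rate -gt 30 ]] && return 1   # an interrupt-mode reactor should be near 0%
  fi
  return 0
}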
00:26:39.143 16:42:15 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:26:39.143 [2024-07-11 16:42:15.943931] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:39.401 16:42:15 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:39.401 [2024-07-11 16:42:16.180012] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:39.401 [2024-07-11 16:42:16.180454] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:39.401 16:42:16 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:39.660 [2024-07-11 16:42:16.367876] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:39.660 [2024-07-11 16:42:16.368385] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:39.660 16:42:16 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:39.660 16:42:16 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136440 0 00:26:39.660 16:42:16 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136440 0 busy 00:26:39.660 16:42:16 -- interrupt/interrupt_common.sh@33 -- # local pid=136440 00:26:39.660 16:42:16 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:39.660 16:42:16 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:39.660 16:42:16 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:39.660 16:42:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:39.660 16:42:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:39.660 16:42:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:39.660 16:42:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136440 -w 256 00:26:39.660 16:42:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:39.918 16:42:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136440 root 20 0 20.1t 145776 28724 R 99.9 1.2 0:01.00 reactor_0' 00:26:39.918 16:42:16 -- interrupt/interrupt_common.sh@48 -- # echo 136440 root 20 0 20.1t 145776 28724 R 99.9 1.2 0:01.00 reactor_0 00:26:39.918 16:42:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:39.918 16:42:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:39.918 16:42:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:39.918 16:42:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:39.918 16:42:16 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:39.918 16:42:16 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:39.918 16:42:16 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:39.918 16:42:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:39.918 16:42:16 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:39.918 16:42:16 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136440 2 00:26:39.918 16:42:16 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136440 2 busy 00:26:39.918 16:42:16 -- interrupt/interrupt_common.sh@33 -- # local pid=136440 00:26:39.918 16:42:16 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:39.919 
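In this without_thd variant the app thread is first migrated off reactor 0, then both target reactors are switched to poll mode (-d disables interrupt mode). The calls, as traced:

$rpc_py thread_set_cpumask -i 1 -m 0x2                               # move app_thread to core 1
$rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d    # reactor 0 -> poll mode
$rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d    # reactor 2 -> poll mode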
16:42:16 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136440 -w 256 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136453 root 20 0 20.1t 145776 28724 R 99.9 1.2 0:00.33 reactor_2' 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@48 -- # echo 136453 root 20 0 20.1t 145776 28724 R 99.9 1.2 0:00.33 reactor_2 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:39.919 16:42:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:39.919 16:42:16 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:40.177 [2024-07-11 16:42:16.891921] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:40.178 [2024-07-11 16:42:16.892455] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:40.178 16:42:16 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:26:40.178 16:42:16 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 136440 2 00:26:40.178 16:42:16 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136440 2 idle 00:26:40.178 16:42:16 -- interrupt/interrupt_common.sh@33 -- # local pid=136440 00:26:40.178 16:42:16 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:40.178 16:42:16 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:40.178 16:42:16 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:40.178 16:42:16 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:40.178 16:42:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:40.178 16:42:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:40.178 16:42:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:40.178 16:42:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136440 -w 256 00:26:40.178 16:42:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:40.436 16:42:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136453 root 20 0 20.1t 145844 28724 S 0.0 1.2 0:00.52 reactor_2' 00:26:40.436 16:42:17 -- interrupt/interrupt_common.sh@48 -- # echo 136453 root 20 0 20.1t 145844 28724 S 0.0 1.2 0:00.52 reactor_2 00:26:40.436 16:42:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:40.436 16:42:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:40.436 16:42:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:40.436 16:42:17 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:40.436 16:42:17 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:40.436 16:42:17 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:40.436 16:42:17 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:40.436 16:42:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:40.436 16:42:17 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:40.694 [2024-07-11 16:42:17.287901] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:40.694 [2024-07-11 16:42:17.288392] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:40.694 16:42:17 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:26:40.694 16:42:17 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:26:40.694 16:42:17 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:26:40.694 [2024-07-11 16:42:17.475901] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:40.694 16:42:17 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 136440 0 00:26:40.694 16:42:17 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136440 0 idle 00:26:40.694 16:42:17 -- interrupt/interrupt_common.sh@33 -- # local pid=136440 00:26:40.694 16:42:17 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:40.694 16:42:17 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:40.694 16:42:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:40.694 16:42:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:40.694 16:42:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:40.694 16:42:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:40.694 16:42:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:40.694 16:42:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136440 -w 256 00:26:40.694 16:42:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:40.953 16:42:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136440 root 20 0 20.1t 145936 28724 S 0.0 1.2 0:01.76 reactor_0' 00:26:40.953 16:42:17 -- interrupt/interrupt_common.sh@48 -- # echo 136440 root 20 0 20.1t 145936 28724 S 0.0 1.2 0:01.76 reactor_0 00:26:40.953 16:42:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:40.953 16:42:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:40.953 16:42:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:40.953 16:42:17 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:40.953 16:42:17 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:40.953 16:42:17 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:40.953 16:42:17 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:40.953 16:42:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:40.953 16:42:17 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:40.953 16:42:17 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:26:40.953 16:42:17 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:26:40.953 16:42:17 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 136440 
00:26:40.953 16:42:17 -- common/autotest_common.sh@926 -- # '[' -z 136440 ']' 00:26:40.953 16:42:17 -- common/autotest_common.sh@930 -- # kill -0 136440 00:26:40.953 16:42:17 -- common/autotest_common.sh@931 -- # uname 00:26:40.953 16:42:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:40.953 16:42:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136440 00:26:40.953 16:42:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:40.953 16:42:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:40.953 16:42:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136440' 00:26:40.953 killing process with pid 136440 00:26:40.953 16:42:17 -- common/autotest_common.sh@945 -- # kill 136440 00:26:40.953 16:42:17 -- common/autotest_common.sh@950 -- # wait 136440 00:26:42.326 16:42:18 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:26:42.326 16:42:18 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:42.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:42.326 16:42:18 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:26:42.326 16:42:18 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.326 16:42:18 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:42.326 16:42:18 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=136592 00:26:42.326 16:42:18 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:42.326 16:42:18 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:42.326 16:42:18 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 136592 /var/tmp/spdk.sock 00:26:42.326 16:42:18 -- common/autotest_common.sh@819 -- # '[' -z 136592 ']' 00:26:42.326 16:42:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:42.326 16:42:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:42.326 16:42:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:42.326 16:42:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:42.326 16:42:18 -- common/autotest_common.sh@10 -- # set +x 00:26:42.326 [2024-07-11 16:42:18.872385] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:42.326 [2024-07-11 16:42:18.873021] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136592 ] 00:26:42.326 [2024-07-11 16:42:19.089635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:42.585 [2024-07-11 16:42:19.371778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.585 [2024-07-11 16:42:19.371940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.585 [2024-07-11 16:42:19.371961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.152 [2024-07-11 16:42:19.763487] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
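killprocess above is the teardown helper. A sketch condensed from the trace (the sudo comparison is rendered here as a plain guard; what the real helper does for sudo-owned processes is not visible in this log):

killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1
  kill -0 "$pid" || return 1                          # is it still running?
  if [ "$(uname)" = Linux ]; then
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # "reactor_0" in the trace
    [ "$process_name" = sudo ] && return 1            # guard; see note above
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"
}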
00:26:43.152 16:42:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:43.152 16:42:19 -- common/autotest_common.sh@852 -- # return 0 00:26:43.152 16:42:19 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:26:43.152 16:42:19 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:43.719 Malloc0 00:26:43.719 Malloc1 00:26:43.719 Malloc2 00:26:43.719 16:42:20 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:26:43.719 16:42:20 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:43.719 16:42:20 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:43.719 16:42:20 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:43.719 5000+0 records in 00:26:43.719 5000+0 records out 00:26:43.719 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0286472 s, 357 MB/s 00:26:43.719 16:42:20 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:43.977 AIO0 00:26:43.978 16:42:20 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 136592 00:26:43.978 16:42:20 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 136592 00:26:43.978 16:42:20 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=136592 00:26:43.978 16:42:20 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:26:43.978 16:42:20 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:43.978 16:42:20 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:43.978 16:42:20 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:43.978 16:42:20 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:43.978 16:42:20 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:43.978 16:42:20 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:43.978 16:42:20 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:43.978 16:42:20 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:44.236 16:42:20 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:44.236 16:42:20 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:44.236 16:42:20 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:26:44.236 16:42:20 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:44.236 16:42:20 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:44.236 16:42:20 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:44.236 16:42:20 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:44.236 16:42:20 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:44.236 16:42:20 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:44.494 16:42:21 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:44.494 16:42:21 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on 
reactor0.' 00:26:44.494 spdk_thread ids are 1 on reactor0. 00:26:44.494 16:42:21 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:44.494 16:42:21 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136592 0 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136592 0 idle 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@33 -- # local pid=136592 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136592 -w 256 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136592 root 20 0 20.1t 145540 28672 S 6.7 1.2 0:01.04 reactor_0' 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@48 -- # echo 136592 root 20 0 20.1t 145540 28672 S 6.7 1.2 0:01.04 reactor_0 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:44.494 16:42:21 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:44.494 16:42:21 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136592 1 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136592 1 idle 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@33 -- # local pid=136592 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136592 -w 256 00:26:44.494 16:42:21 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136596 root 20 0 20.1t 145540 28672 S 0.0 1.2 0:00.00 reactor_1' 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@48 -- # echo 136596 root 20 0 20.1t 145540 28672 S 0.0 1.2 0:00.00 reactor_1 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@48 -- # awk '{print 
$9}' 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:44.754 16:42:21 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:44.754 16:42:21 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 136592 2 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136592 2 idle 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@33 -- # local pid=136592 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136592 -w 256 00:26:44.754 16:42:21 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:45.013 16:42:21 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136597 root 20 0 20.1t 145540 28672 S 0.0 1.2 0:00.00 reactor_2' 00:26:45.013 16:42:21 -- interrupt/interrupt_common.sh@48 -- # echo 136597 root 20 0 20.1t 145540 28672 S 0.0 1.2 0:00.00 reactor_2 00:26:45.013 16:42:21 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:45.013 16:42:21 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:45.013 16:42:21 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:45.013 16:42:21 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:45.013 16:42:21 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:45.013 16:42:21 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:45.013 16:42:21 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:45.013 16:42:21 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:45.013 16:42:21 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:26:45.013 16:42:21 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:45.272 [2024-07-11 16:42:21.859779] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:45.272 [2024-07-11 16:42:21.860325] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:26:45.272 [2024-07-11 16:42:21.860719] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:45.272 16:42:21 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:45.531 [2024-07-11 16:42:22.115642] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
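The --plugin interrupt_plugin flag works because reactor_set_interrupt.sh extended PYTHONPATH with examples/interrupt_tgt earlier in this log; rpc.py imports the named module from there, which registers the reactor_set_interrupt_mode subcommand used throughout the test. Shape of the call, with the path export shown for context:

export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d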
00:26:45.531 [2024-07-11 16:42:22.116131] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:45.531 16:42:22 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:45.531 16:42:22 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136592 0 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136592 0 busy 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@33 -- # local pid=136592 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136592 -w 256 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136592 root 20 0 20.1t 145624 28672 R 93.3 1.2 0:01.46 reactor_0' 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@48 -- # echo 136592 root 20 0 20.1t 145624 28672 R 93.3 1.2 0:01.46 reactor_0 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.3 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:45.531 16:42:22 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:45.531 16:42:22 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 136592 2 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 136592 2 busy 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@33 -- # local pid=136592 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136592 -w 256 00:26:45.531 16:42:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:45.790 16:42:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136597 root 20 0 20.1t 145624 28672 R 99.9 1.2 0:00.34 reactor_2' 00:26:45.790 16:42:22 -- interrupt/interrupt_common.sh@48 -- # echo 136597 root 20 0 20.1t 145624 28672 R 99.9 1.2 0:00.34 reactor_2 00:26:45.790 16:42:22 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:45.790 16:42:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:45.790 16:42:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:45.790 
16:42:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:45.790 16:42:22 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:45.790 16:42:22 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:45.790 16:42:22 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:45.790 16:42:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:45.790 16:42:22 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:46.048 [2024-07-11 16:42:22.659775] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:46.048 [2024-07-11 16:42:22.660143] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:46.048 16:42:22 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:26:46.048 16:42:22 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 136592 2 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136592 2 idle 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@33 -- # local pid=136592 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136592 -w 256 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136597 root 20 0 20.1t 145700 28672 S 0.0 1.2 0:00.54 reactor_2' 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@48 -- # echo 136597 root 20 0 20.1t 145700 28672 S 0.0 1.2 0:00.54 reactor_2 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:46.048 16:42:22 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:46.049 16:42:22 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:46.049 16:42:22 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:46.318 [2024-07-11 16:42:23.031854] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:46.318 [2024-07-11 16:42:23.032518] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
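Putting the pieces together: both runs in this log execute the same reactor_set_intr_mode sequence, and only the without_thd branches differ (the first run migrates app_thread off reactor 0 before polling begins; this second run leaves it in place, hence the "poll mode from intr mode" notices). A reconstruction from the two traced runs (names follow the trace; per-step assertions trimmed):

reactor_set_intr_mode() {
  local spdk_pid=$1 without_thd=$2
  if [ "${without_thd}x" != x ]; then   # migrate app_thread off reactor 0 first
    for i in "${thd0_ids[@]}"; do
      $rpc_py thread_set_cpumask -i "$i" -m $r1_mask
    done
  fi
  $rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
  $rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
  for i in 0 2; do reactor_is_busy "$spdk_pid" "$i"; done   # poll mode -> ~100% CPU
  $rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 2
  reactor_is_idle "$spdk_pid" 2                             # intr mode -> ~0% CPU
  $rpc_py --plugin interrupt_plugin reactor_set_interrupt_mode 0
  if [ "${without_thd}x" != x ]; then   # pin app_thread back to reactor 0
    for i in "${thd0_ids[@]}"; do
      $rpc_py thread_set_cpumask -i "$i" -m $r0_mask
    done
  fi
  reactor_is_idle "$spdk_pid" 0
}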
00:26:46.318 [2024-07-11 16:42:23.032668] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:46.318 16:42:23 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:26:46.318 16:42:23 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 136592 0 00:26:46.318 16:42:23 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 136592 0 idle 00:26:46.318 16:42:23 -- interrupt/interrupt_common.sh@33 -- # local pid=136592 00:26:46.318 16:42:23 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:46.318 16:42:23 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:46.318 16:42:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:46.318 16:42:23 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:46.318 16:42:23 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:46.318 16:42:23 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:46.318 16:42:23 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:46.318 16:42:23 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 136592 -w 256 00:26:46.318 16:42:23 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:46.575 16:42:23 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 136592 root 20 0 20.1t 145740 28672 S 0.0 1.2 0:02.21 reactor_0' 00:26:46.575 16:42:23 -- interrupt/interrupt_common.sh@48 -- # echo 136592 root 20 0 20.1t 145740 28672 S 0.0 1.2 0:02.21 reactor_0 00:26:46.575 16:42:23 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:46.575 16:42:23 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:46.575 16:42:23 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:46.575 16:42:23 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:46.575 16:42:23 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:46.575 16:42:23 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:46.575 16:42:23 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:46.575 16:42:23 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:46.575 16:42:23 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:46.575 16:42:23 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:26:46.575 16:42:23 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:46.575 16:42:23 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 136592 00:26:46.575 16:42:23 -- common/autotest_common.sh@926 -- # '[' -z 136592 ']' 00:26:46.575 16:42:23 -- common/autotest_common.sh@930 -- # kill -0 136592 00:26:46.575 16:42:23 -- common/autotest_common.sh@931 -- # uname 00:26:46.575 16:42:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:46.575 16:42:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136592 00:26:46.575 16:42:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:46.575 16:42:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:46.575 16:42:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136592' 00:26:46.575 killing process with pid 136592 00:26:46.575 16:42:23 -- common/autotest_common.sh@945 -- # kill 136592 00:26:46.575 16:42:23 -- common/autotest_common.sh@950 -- # wait 136592 00:26:47.950 16:42:24 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:26:47.950 16:42:24 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:47.950 00:26:47.950 real 0m11.434s 00:26:47.950 
user 0m11.724s 00:26:47.950 sys 0m1.606s 00:26:47.950 16:42:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.950 16:42:24 -- common/autotest_common.sh@10 -- # set +x 00:26:47.950 ************************************ 00:26:47.950 END TEST reactor_set_interrupt 00:26:47.950 ************************************ 00:26:47.950 16:42:24 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:47.950 16:42:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:47.950 16:42:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:47.950 16:42:24 -- common/autotest_common.sh@10 -- # set +x 00:26:47.950 ************************************ 00:26:47.950 START TEST reap_unregistered_poller 00:26:47.950 ************************************ 00:26:47.950 16:42:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:47.950 * Looking for test storage... 00:26:47.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:47.950 16:42:24 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:47.950 16:42:24 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:47.950 16:42:24 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:47.950 16:42:24 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:47.950 16:42:24 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:26:47.950 16:42:24 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:47.950 16:42:24 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:47.950 16:42:24 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:47.950 16:42:24 -- common/autotest_common.sh@34 -- # set -e 00:26:47.950 16:42:24 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:47.950 16:42:24 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:47.950 16:42:24 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:47.950 16:42:24 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:47.950 16:42:24 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:47.950 16:42:24 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:26:47.950 16:42:24 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:26:47.950 16:42:24 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:26:47.950 16:42:24 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:26:47.950 16:42:24 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:26:47.950 16:42:24 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:26:47.950 16:42:24 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:26:47.950 16:42:24 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:26:47.950 16:42:24 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:26:47.950 16:42:24 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:26:47.950 16:42:24 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:26:47.950 16:42:24 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:26:47.950 16:42:24 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 
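Aside: the reactor_is_busy_or_idle sequence that closed the reactor_set_interrupt test above reduces to one batch-mode top sample plus a fixed CPU threshold. A minimal sketch of that check, using the 30% busy cutoff and field positions visible in the trace (the flattened helper below is illustrative, not the script's literal text):

reactor_state() {
  local pid=$1 idx=$2 line cpu
  # one batch-mode pass (-b), per-thread view (-H), single iteration (-n 1)
  line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
  cpu=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')  # %CPU column
  cpu=${cpu%.*}; cpu=${cpu:-0}   # "0.0" -> "0", as the cpu_rate lines show
  if (( cpu > 30 )); then echo busy; else echo idle; fi
}
# e.g. the trace's "reactor_is_idle 136592 0" amounts to:
# [[ $(reactor_state 136592 0) == idle ]]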
00:26:47.950 16:42:24 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:26:47.950 16:42:24 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:26:47.950 16:42:24 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:26:47.950 16:42:24 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:26:47.950 16:42:24 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:26:47.950 16:42:24 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:26:47.950 16:42:24 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:26:47.950 16:42:24 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:47.950 16:42:24 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:26:47.950 16:42:24 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:26:47.950 16:42:24 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:26:47.950 16:42:24 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:26:47.950 16:42:24 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:26:47.950 16:42:24 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:47.950 16:42:24 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:26:47.950 16:42:24 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:26:47.950 16:42:24 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:26:47.950 16:42:24 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:26:47.950 16:42:24 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:26:47.950 16:42:24 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:26:47.950 16:42:24 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:26:47.950 16:42:24 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:26:47.950 16:42:24 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:47.950 16:42:24 -- common/build_config.sh@38 -- # CONFIG_ASAN=y 00:26:47.950 16:42:24 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:26:47.950 16:42:24 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:26:47.950 16:42:24 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:26:47.950 16:42:24 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:26:47.950 16:42:24 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:26:47.950 16:42:24 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:26:47.950 16:42:24 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:47.950 16:42:24 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:26:47.950 16:42:24 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:26:47.950 16:42:24 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:26:47.950 16:42:24 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:26:47.950 16:42:24 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:26:47.950 16:42:24 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:47.950 16:42:24 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:26:47.950 16:42:24 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:26:47.950 16:42:24 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:26:47.950 16:42:24 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:47.950 16:42:24 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:26:47.950 16:42:24 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:26:47.950 16:42:24 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:47.950 16:42:24 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:47.950 16:42:24 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:47.950 16:42:24 -- common/build_config.sh@61 -- # 
CONFIG_CROSS_PREFIX= 00:26:47.950 16:42:24 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:26:47.950 16:42:24 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:26:47.950 16:42:24 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:26:47.950 16:42:24 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:26:47.950 16:42:24 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:26:47.950 16:42:24 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:26:47.950 16:42:24 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:47.950 16:42:24 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:26:47.950 16:42:24 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:26:47.950 16:42:24 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:26:47.950 16:42:24 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:26:47.950 16:42:24 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:26:47.950 16:42:24 -- common/build_config.sh@74 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:47.950 16:42:24 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:26:47.950 16:42:24 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:26:47.950 16:42:24 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:26:47.950 16:42:24 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:26:47.950 16:42:24 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:47.950 16:42:24 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:47.950 16:42:24 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:47.950 16:42:24 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:47.950 16:42:24 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:47.950 16:42:24 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:47.950 16:42:24 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:47.950 16:42:24 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:47.950 16:42:24 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:47.950 16:42:24 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:47.950 16:42:24 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:47.950 16:42:24 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:47.950 16:42:24 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:47.950 16:42:24 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:47.950 16:42:24 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:47.950 16:42:24 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:47.950 16:42:24 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:47.950 #define SPDK_CONFIG_H 00:26:47.950 #define SPDK_CONFIG_APPS 1 00:26:47.950 #define SPDK_CONFIG_ARCH native 00:26:47.950 #define SPDK_CONFIG_ASAN 1 00:26:47.950 #undef SPDK_CONFIG_AVAHI 00:26:47.950 #undef SPDK_CONFIG_CET 00:26:47.950 #define SPDK_CONFIG_COVERAGE 1 00:26:47.950 #define SPDK_CONFIG_CROSS_PREFIX 00:26:47.950 #undef SPDK_CONFIG_CRYPTO 00:26:47.950 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:47.950 #undef SPDK_CONFIG_CUSTOMOCF 00:26:47.950 #undef SPDK_CONFIG_DAOS 00:26:47.950 #define SPDK_CONFIG_DAOS_DIR 00:26:47.950 
#define SPDK_CONFIG_DEBUG 1 00:26:47.950 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:47.950 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:47.950 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:47.950 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:47.950 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:47.950 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:47.950 #define SPDK_CONFIG_EXAMPLES 1 00:26:47.950 #undef SPDK_CONFIG_FC 00:26:47.950 #define SPDK_CONFIG_FC_PATH 00:26:47.950 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:47.950 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:47.950 #undef SPDK_CONFIG_FUSE 00:26:47.950 #undef SPDK_CONFIG_FUZZER 00:26:47.950 #define SPDK_CONFIG_FUZZER_LIB 00:26:47.950 #undef SPDK_CONFIG_GOLANG 00:26:47.950 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:47.950 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:47.950 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:47.950 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:47.950 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:47.950 #define SPDK_CONFIG_IDXD 1 00:26:47.950 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:47.950 #undef SPDK_CONFIG_IPSEC_MB 00:26:47.951 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:47.951 #define SPDK_CONFIG_ISAL 1 00:26:47.951 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:47.951 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:47.951 #define SPDK_CONFIG_LIBDIR 00:26:47.951 #undef SPDK_CONFIG_LTO 00:26:47.951 #define SPDK_CONFIG_MAX_LCORES 00:26:47.951 #define SPDK_CONFIG_NVME_CUSE 1 00:26:47.951 #undef SPDK_CONFIG_OCF 00:26:47.951 #define SPDK_CONFIG_OCF_PATH 00:26:47.951 #define SPDK_CONFIG_OPENSSL_PATH 00:26:47.951 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:47.951 #undef SPDK_CONFIG_PGO_USE 00:26:47.951 #define SPDK_CONFIG_PREFIX /usr/local 00:26:47.951 #define SPDK_CONFIG_RAID5F 1 00:26:47.951 #undef SPDK_CONFIG_RBD 00:26:47.951 #define SPDK_CONFIG_RDMA 1 00:26:47.951 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:47.951 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:47.951 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:47.951 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:47.951 #undef SPDK_CONFIG_SHARED 00:26:47.951 #undef SPDK_CONFIG_SMA 00:26:47.951 #define SPDK_CONFIG_TESTS 1 00:26:47.951 #undef SPDK_CONFIG_TSAN 00:26:47.951 #undef SPDK_CONFIG_UBLK 00:26:47.951 #define SPDK_CONFIG_UBSAN 1 00:26:47.951 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:47.951 #undef SPDK_CONFIG_URING 00:26:47.951 #define SPDK_CONFIG_URING_PATH 00:26:47.951 #undef SPDK_CONFIG_URING_ZNS 00:26:47.951 #undef SPDK_CONFIG_USDT 00:26:47.951 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:47.951 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:47.951 #undef SPDK_CONFIG_VFIO_USER 00:26:47.951 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:47.951 #define SPDK_CONFIG_VHOST 1 00:26:47.951 #define SPDK_CONFIG_VIRTIO 1 00:26:47.951 #undef SPDK_CONFIG_VTUNE 00:26:47.951 #define SPDK_CONFIG_VTUNE_DIR 00:26:47.951 #define SPDK_CONFIG_WERROR 1 00:26:47.951 #define SPDK_CONFIG_WPDK_DIR 00:26:47.951 #undef SPDK_CONFIG_XNVME 00:26:47.951 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:47.951 16:42:24 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:47.951 16:42:24 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:47.951 16:42:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.951 16:42:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.951 16:42:24 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.951 16:42:24 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:47.951 16:42:24 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:47.951 16:42:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:47.951 16:42:24 -- paths/export.sh@5 -- # export PATH 00:26:47.951 16:42:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:47.951 16:42:24 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:47.951 16:42:24 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:47.951 16:42:24 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:47.951 16:42:24 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:47.951 16:42:24 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:47.951 16:42:24 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:47.951 16:42:24 -- pm/common@16 -- # TEST_TAG=N/A 00:26:47.951 16:42:24 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:47.951 16:42:24 -- common/autotest_common.sh@52 -- # : 1 00:26:47.951 16:42:24 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:47.951 16:42:24 -- common/autotest_common.sh@56 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:47.951 16:42:24 -- common/autotest_common.sh@58 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:47.951 16:42:24 -- common/autotest_common.sh@60 -- # : 1 00:26:47.951 16:42:24 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:47.951 16:42:24 -- common/autotest_common.sh@62 -- # : 1 00:26:47.951 16:42:24 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:47.951 16:42:24 -- common/autotest_common.sh@64 -- # : 00:26:47.951 16:42:24 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:47.951 16:42:24 -- common/autotest_common.sh@66 -- # : 0 00:26:47.951 16:42:24 -- 
common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:47.951 16:42:24 -- common/autotest_common.sh@68 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:47.951 16:42:24 -- common/autotest_common.sh@70 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:47.951 16:42:24 -- common/autotest_common.sh@72 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:47.951 16:42:24 -- common/autotest_common.sh@74 -- # : 1 00:26:47.951 16:42:24 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:47.951 16:42:24 -- common/autotest_common.sh@76 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:47.951 16:42:24 -- common/autotest_common.sh@78 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:47.951 16:42:24 -- common/autotest_common.sh@80 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:47.951 16:42:24 -- common/autotest_common.sh@82 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:47.951 16:42:24 -- common/autotest_common.sh@84 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:47.951 16:42:24 -- common/autotest_common.sh@86 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:47.951 16:42:24 -- common/autotest_common.sh@88 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:47.951 16:42:24 -- common/autotest_common.sh@90 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:47.951 16:42:24 -- common/autotest_common.sh@92 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:47.951 16:42:24 -- common/autotest_common.sh@94 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:47.951 16:42:24 -- common/autotest_common.sh@96 -- # : rdma 00:26:47.951 16:42:24 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:47.951 16:42:24 -- common/autotest_common.sh@98 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:47.951 16:42:24 -- common/autotest_common.sh@100 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:47.951 16:42:24 -- common/autotest_common.sh@102 -- # : 1 00:26:47.951 16:42:24 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:47.951 16:42:24 -- common/autotest_common.sh@104 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:47.951 16:42:24 -- common/autotest_common.sh@106 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:47.951 16:42:24 -- common/autotest_common.sh@108 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:47.951 16:42:24 -- common/autotest_common.sh@110 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:47.951 16:42:24 -- common/autotest_common.sh@112 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:47.951 16:42:24 -- common/autotest_common.sh@114 -- # : 1 
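The long run of paired ': <value>' / 'export SPDK_*' lines here (continuing below) is one idiom from autotest_common.sh: each test flag gets a default via parameter expansion under the ':' null command, then is exported so child scripts inherit it. Flags that autorun-spdk.conf already set (e.g. SPDK_TEST_NVME=1) expand to their existing value, which is why some pairs trace as ': 1' while unset ones trace as their default. A sketch, assuming the standard ':=' form (the file's exact text may differ slightly):

# ':' evaluates its arguments and discards the result, so ':=' assigns
# the default only when the flag is still unset or empty:
: ${SPDK_TEST_UNITTEST:=0};          export SPDK_TEST_UNITTEST
: ${SPDK_TEST_NVME:=0};              export SPDK_TEST_NVME
: ${SPDK_TEST_NVMF_TRANSPORT:=rdma}; export SPDK_TEST_NVMF_TRANSPORT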
00:26:47.951 16:42:24 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:47.951 16:42:24 -- common/autotest_common.sh@116 -- # : 1 00:26:47.951 16:42:24 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:47.951 16:42:24 -- common/autotest_common.sh@118 -- # : 00:26:47.951 16:42:24 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:47.951 16:42:24 -- common/autotest_common.sh@120 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:47.951 16:42:24 -- common/autotest_common.sh@122 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:47.951 16:42:24 -- common/autotest_common.sh@124 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:47.951 16:42:24 -- common/autotest_common.sh@126 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:47.951 16:42:24 -- common/autotest_common.sh@128 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:47.951 16:42:24 -- common/autotest_common.sh@130 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:47.951 16:42:24 -- common/autotest_common.sh@132 -- # : 00:26:47.951 16:42:24 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:47.951 16:42:24 -- common/autotest_common.sh@134 -- # : true 00:26:47.951 16:42:24 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:47.951 16:42:24 -- common/autotest_common.sh@136 -- # : 1 00:26:47.951 16:42:24 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:47.951 16:42:24 -- common/autotest_common.sh@138 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:47.951 16:42:24 -- common/autotest_common.sh@140 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:47.951 16:42:24 -- common/autotest_common.sh@142 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:47.951 16:42:24 -- common/autotest_common.sh@144 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:47.951 16:42:24 -- common/autotest_common.sh@146 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:47.951 16:42:24 -- common/autotest_common.sh@148 -- # : 00:26:47.951 16:42:24 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:47.951 16:42:24 -- common/autotest_common.sh@150 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:47.951 16:42:24 -- common/autotest_common.sh@152 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:47.951 16:42:24 -- common/autotest_common.sh@154 -- # : 0 00:26:47.951 16:42:24 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:47.952 16:42:24 -- common/autotest_common.sh@156 -- # : 0 00:26:47.952 16:42:24 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:47.952 16:42:24 -- common/autotest_common.sh@158 -- # : 0 00:26:47.952 16:42:24 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:47.952 16:42:24 -- common/autotest_common.sh@160 -- # : 0 00:26:47.952 16:42:24 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:26:47.952 16:42:24 -- common/autotest_common.sh@163 -- # 
: 00:26:47.952 16:42:24 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:26:47.952 16:42:24 -- common/autotest_common.sh@165 -- # : 0 00:26:47.952 16:42:24 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:26:47.952 16:42:24 -- common/autotest_common.sh@167 -- # : 0 00:26:47.952 16:42:24 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:47.952 16:42:24 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:47.952 16:42:24 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:47.952 16:42:24 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:47.952 16:42:24 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:47.952 16:42:24 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:47.952 16:42:24 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:47.952 16:42:24 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:47.952 16:42:24 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:47.952 16:42:24 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:47.952 16:42:24 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:47.952 16:42:24 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:47.952 16:42:24 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:47.952 16:42:24 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:47.952 16:42:24 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:26:47.952 16:42:24 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:47.952 16:42:24 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:47.952 16:42:24 -- common/autotest_common.sh@190 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:47.952 16:42:24 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:47.952 16:42:24 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:47.952 16:42:24 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:26:47.952 16:42:24 -- common/autotest_common.sh@196 -- # cat 00:26:47.952 16:42:24 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:26:47.952 16:42:24 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:47.952 16:42:24 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:47.952 16:42:24 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:47.952 16:42:24 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:47.952 16:42:24 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:26:47.952 16:42:24 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:26:47.952 16:42:24 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:47.952 16:42:24 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:47.952 16:42:24 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:47.952 16:42:24 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:47.952 16:42:24 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:26:47.952 16:42:24 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:26:47.952 16:42:24 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:47.952 16:42:24 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:47.952 16:42:24 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:47.952 16:42:24 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:47.952 16:42:24 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:47.952 16:42:24 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:47.952 16:42:24 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:26:47.952 16:42:24 -- common/autotest_common.sh@249 -- # export valgrind= 00:26:47.952 16:42:24 -- common/autotest_common.sh@249 -- # valgrind= 00:26:47.952 16:42:24 -- common/autotest_common.sh@255 -- # uname -s 00:26:47.952 16:42:24 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:26:47.952 16:42:24 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:26:47.952 16:42:24 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:26:47.952 16:42:24 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:26:47.952 16:42:24 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:47.952 16:42:24 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:26:47.952 16:42:24 -- common/autotest_common.sh@265 -- # MAKE=make 00:26:47.952 16:42:24 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:26:47.952 16:42:24 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:26:47.952 16:42:24 -- 
common/autotest_common.sh@282 -- # HUGEMEM=4096 00:26:47.952 16:42:24 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:47.952 16:42:24 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:26:47.952 16:42:24 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:26:47.952 16:42:24 -- common/autotest_common.sh@309 -- # [[ -z 136780 ]] 00:26:47.952 16:42:24 -- common/autotest_common.sh@309 -- # kill -0 136780 00:26:47.952 16:42:24 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:26:47.952 16:42:24 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:26:47.952 16:42:24 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:26:47.952 16:42:24 -- common/autotest_common.sh@322 -- # local mount target_dir 00:26:47.952 16:42:24 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:26:47.952 16:42:24 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:26:47.952 16:42:24 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:26:47.952 16:42:24 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:26:47.952 16:42:24 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.94O0Fu 00:26:47.952 16:42:24 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:47.952 16:42:24 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:26:47.952 16:42:24 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:26:47.952 16:42:24 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.94O0Fu/tests/interrupt /tmp/spdk.94O0Fu 00:26:47.952 16:42:24 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:26:47.952 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.952 16:42:24 -- common/autotest_common.sh@318 -- # df -T 00:26:47.952 16:42:24 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=udev 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=6224461824 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6224461824 00:26:47.952 16:42:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:47.952 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=1249763328 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254514688 00:26:47.952 16:42:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=4751360 00:26:47.952 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda1 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=10616164352 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20616794112 00:26:47.952 16:42:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=9983852544 00:26:47.952 16:42:24 -- 
common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=6269968384 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272561152 00:26:47.952 16:42:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:26:47.952 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=5242880 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5242880 00:26:47.952 16:42:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:47.952 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=6272561152 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6272561152 00:26:47.952 16:42:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:47.952 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda15 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=103089152 00:26:47.952 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=109422592 00:26:47.952 16:42:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=6334464 00:26:47.952 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.952 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop0 00:26:47.953 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:26:47.953 16:42:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:26:47.953 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.953 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop2 00:26:47.953 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=41025536 00:26:47.953 16:42:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=41025536 00:26:47.953 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.953 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop1 00:26:47.953 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=96337920 00:26:47.953 16:42:24 -- 
common/autotest_common.sh@354 -- # uses["$mount"]=96337920 00:26:47.953 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.953 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:26:47.953 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=1254510592 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1254510592 00:26:47.953 16:42:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:26:47.953 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.953 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:26:47.953 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=97899794432 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:26:47.953 16:42:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=1802985472 00:26:47.953 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.953 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop3 00:26:47.953 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=40763392 00:26:47.953 16:42:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=40763392 00:26:47.953 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.953 16:42:24 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/loop4 00:26:47.953 16:42:24 -- common/autotest_common.sh@352 -- # fss["$mount"]=squashfs 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # avails["$mount"]=0 00:26:47.953 16:42:24 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:26:47.953 16:42:24 -- common/autotest_common.sh@354 -- # uses["$mount"]=67108864 00:26:47.953 16:42:24 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:26:47.953 16:42:24 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:26:47.953 * Looking for test storage... 
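The block above is set_test_storage probing mounts: one df -T pass is parsed into per-mount arrays, then each storage candidate is accepted only if its backing filesystem can hold the requested ~2 GiB. A simplified sketch of that flow (array and variable names follow the trace; the real helper's unit normalization, which yields the byte counts seen above, is omitted here):

declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source; fss["$mount"]=$fs
  sizes["$mount"]=$size; avails["$mount"]=$avail; uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)

requested_size=2214592512       # ~2 GiB plus slack, as requested above
target_dir=$HOME/some/test/dir  # placeholder for one storage candidate
mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
target_space=${avails["$mount"]}
if (( target_space == 0 || target_space < requested_size )); then
  echo "skip $target_dir: $target_space bytes free on $mount" >&2
fi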
00:26:47.953 16:42:24 -- common/autotest_common.sh@359 -- # local target_space new_size 00:26:47.953 16:42:24 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:26:47.953 16:42:24 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:47.953 16:42:24 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:47.953 16:42:24 -- common/autotest_common.sh@363 -- # mount=/ 00:26:47.953 16:42:24 -- common/autotest_common.sh@365 -- # target_space=10616164352 00:26:47.953 16:42:24 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:26:47.953 16:42:24 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:26:47.953 16:42:24 -- common/autotest_common.sh@371 -- # [[ ext4 == tmpfs ]] 00:26:47.953 16:42:24 -- common/autotest_common.sh@371 -- # [[ ext4 == ramfs ]] 00:26:47.953 16:42:24 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:26:47.953 16:42:24 -- common/autotest_common.sh@372 -- # new_size=12198445056 00:26:47.953 16:42:24 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:47.953 16:42:24 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:47.953 16:42:24 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:47.953 16:42:24 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:47.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:47.953 16:42:24 -- common/autotest_common.sh@380 -- # return 0 00:26:47.953 16:42:24 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:26:47.953 16:42:24 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:26:47.953 16:42:24 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:47.953 16:42:24 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:47.953 16:42:24 -- common/autotest_common.sh@1672 -- # true 00:26:47.953 16:42:24 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:26:47.953 16:42:24 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:47.953 16:42:24 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:47.953 16:42:24 -- common/autotest_common.sh@27 -- # exec 00:26:47.953 16:42:24 -- common/autotest_common.sh@29 -- # exec 00:26:47.953 16:42:24 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:47.953 16:42:24 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:26:47.953 16:42:24 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:47.953 16:42:24 -- common/autotest_common.sh@18 -- # set -x 00:26:47.953 16:42:24 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:47.953 16:42:24 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:47.953 16:42:24 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:47.953 16:42:24 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:47.953 16:42:24 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:47.953 16:42:24 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:47.953 16:42:24 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:47.953 16:42:24 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:47.953 16:42:24 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:26:47.953 16:42:24 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.953 16:42:24 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:47.953 16:42:24 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=136829 00:26:47.953 16:42:24 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:47.953 16:42:24 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:47.953 16:42:24 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 136829 /var/tmp/spdk.sock 00:26:47.953 16:42:24 -- common/autotest_common.sh@819 -- # '[' -z 136829 ']' 00:26:47.953 16:42:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.953 16:42:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:47.953 16:42:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.953 16:42:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:47.953 16:42:24 -- common/autotest_common.sh@10 -- # set +x 00:26:47.953 [2024-07-11 16:42:24.701899] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
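Here start_intr_tgt launches the interrupt_tgt example on a 3-core mask in interrupt mode (-E) and waitforlisten blocks until the RPC socket answers. A minimal sketch of that startup, with waitforlisten reduced to a poll loop against rpc.py (the real helper differs; it also bounds its retries, per the max_retries=100 line above):

spdk=/home/vagrant/spdk_repo/spdk
"$spdk"/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g &
intr_tgt_pid=$!
# kill the target if the test is interrupted or exits early
trap 'kill "$intr_tgt_pid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT
# poll the UNIX-domain RPC socket until the app answers (or dies)
until "$spdk"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
  kill -0 "$intr_tgt_pid" 2>/dev/null || { echo 'interrupt_tgt exited' >&2; exit 1; }
  sleep 0.1
done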
00:26:47.953 [2024-07-11 16:42:24.702850] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136829 ] 00:26:48.211 [2024-07-11 16:42:24.878082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:48.469 [2024-07-11 16:42:25.065621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.469 [2024-07-11 16:42:25.065761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.469 [2024-07-11 16:42:25.065758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.749 [2024-07-11 16:42:25.317869] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:49.026 16:42:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:49.026 16:42:25 -- common/autotest_common.sh@852 -- # return 0 00:26:49.026 16:42:25 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:26:49.026 16:42:25 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:26:49.026 16:42:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:49.026 16:42:25 -- common/autotest_common.sh@10 -- # set +x 00:26:49.026 16:42:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:49.026 16:42:25 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:26:49.026 "name": "app_thread", 00:26:49.026 "id": 1, 00:26:49.026 "active_pollers": [], 00:26:49.026 "timed_pollers": [ 00:26:49.026 { 00:26:49.026 "name": "rpc_subsystem_poll", 00:26:49.026 "id": 1, 00:26:49.026 "state": "waiting", 00:26:49.026 "run_count": 0, 00:26:49.026 "busy_count": 0, 00:26:49.026 "period_ticks": 8800000 00:26:49.026 } 00:26:49.026 ], 00:26:49.026 "paused_pollers": [] 00:26:49.026 }' 00:26:49.026 16:42:25 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:26:49.026 16:42:25 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:26:49.026 16:42:25 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:26:49.026 16:42:25 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:26:49.284 16:42:25 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:26:49.284 16:42:25 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:26:49.284 16:42:25 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:49.284 16:42:25 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:49.284 16:42:25 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:49.284 5000+0 records in 00:26:49.284 5000+0 records out 00:26:49.284 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0236533 s, 433 MB/s 00:26:49.284 16:42:25 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:49.284 AIO0 00:26:49.284 16:42:26 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:49.542 16:42:26 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:26:49.799 16:42:26 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:26:49.799 16:42:26 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r 
'.threads[0]' 00:26:49.799 16:42:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:49.799 16:42:26 -- common/autotest_common.sh@10 -- # set +x 00:26:49.799 16:42:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:49.799 16:42:26 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:26:49.799 "name": "app_thread", 00:26:49.799 "id": 1, 00:26:49.799 "active_pollers": [], 00:26:49.799 "timed_pollers": [ 00:26:49.799 { 00:26:49.799 "name": "rpc_subsystem_poll", 00:26:49.799 "id": 1, 00:26:49.799 "state": "waiting", 00:26:49.799 "run_count": 0, 00:26:49.799 "busy_count": 0, 00:26:49.799 "period_ticks": 8800000 00:26:49.799 } 00:26:49.799 ], 00:26:49.799 "paused_pollers": [] 00:26:49.799 }' 00:26:49.799 16:42:26 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:26:49.799 16:42:26 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:26:49.799 16:42:26 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:26:49.799 16:42:26 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:26:50.057 16:42:26 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:26:50.057 16:42:26 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:26:50.057 16:42:26 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:26:50.057 16:42:26 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 136829 00:26:50.057 16:42:26 -- common/autotest_common.sh@926 -- # '[' -z 136829 ']' 00:26:50.057 16:42:26 -- common/autotest_common.sh@930 -- # kill -0 136829 00:26:50.057 16:42:26 -- common/autotest_common.sh@931 -- # uname 00:26:50.057 16:42:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:50.057 16:42:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 136829 00:26:50.057 killing process with pid 136829 00:26:50.057 16:42:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:50.057 16:42:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:50.057 16:42:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 136829' 00:26:50.057 16:42:26 -- common/autotest_common.sh@945 -- # kill 136829 00:26:50.057 16:42:26 -- common/autotest_common.sh@950 -- # wait 136829 00:26:50.991 16:42:27 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:26:50.991 16:42:27 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:50.991 ************************************ 00:26:50.991 END TEST reap_unregistered_poller 00:26:50.991 ************************************ 00:26:50.991 00:26:50.991 real 0m3.175s 00:26:50.991 user 0m2.668s 00:26:50.991 sys 0m0.444s 00:26:50.991 16:42:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.991 16:42:27 -- common/autotest_common.sh@10 -- # set +x 00:26:50.991 16:42:27 -- spdk/autotest.sh@204 -- # uname -s 00:26:50.991 16:42:27 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:26:50.991 16:42:27 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:26:50.991 16:42:27 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:26:50.991 16:42:27 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:50.991 16:42:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:50.991 16:42:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:50.991 16:42:27 -- 
common/autotest_common.sh@10 -- # set +x 00:26:50.991 ************************************ 00:26:50.991 START TEST spdk_dd 00:26:50.991 ************************************ 00:26:50.991 16:42:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:50.991 * Looking for test storage... 00:26:50.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:50.991 16:42:27 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:50.991 16:42:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.991 16:42:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.991 16:42:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.991 16:42:27 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:50.991 16:42:27 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:50.991 16:42:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:50.991 16:42:27 -- paths/export.sh@5 -- # export PATH 00:26:50.991 16:42:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:50.991 16:42:27 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:51.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:26:51.507 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:52.439 16:42:29 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:26:52.439 16:42:29 -- dd/dd.sh@11 -- # nvme_in_userspace 00:26:52.440 16:42:29 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:52.440 16:42:29 -- scripts/common.sh@312 -- # local nvmes 00:26:52.440 16:42:29 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:52.440 16:42:29 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:52.440 16:42:29 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:52.440 16:42:29 -- scripts/common.sh@297 -- # local bdf= 00:26:52.440 16:42:29 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:52.440 16:42:29 -- scripts/common.sh@232 -- # local class 00:26:52.440 
16:42:29 -- scripts/common.sh@233 -- # local subclass 00:26:52.440 16:42:29 -- scripts/common.sh@234 -- # local progif 00:26:52.440 16:42:29 -- scripts/common.sh@235 -- # printf %02x 1 00:26:52.440 16:42:29 -- scripts/common.sh@235 -- # class=01 00:26:52.440 16:42:29 -- scripts/common.sh@236 -- # printf %02x 8 00:26:52.440 16:42:29 -- scripts/common.sh@236 -- # subclass=08 00:26:52.440 16:42:29 -- scripts/common.sh@237 -- # printf %02x 2 00:26:52.440 16:42:29 -- scripts/common.sh@237 -- # progif=02 00:26:52.440 16:42:29 -- scripts/common.sh@239 -- # hash lspci 00:26:52.440 16:42:29 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:52.440 16:42:29 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:52.440 16:42:29 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:52.440 16:42:29 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:52.440 16:42:29 -- scripts/common.sh@244 -- # tr -d '"' 00:26:52.440 16:42:29 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:52.440 16:42:29 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:52.440 16:42:29 -- scripts/common.sh@15 -- # local i 00:26:52.440 16:42:29 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:52.440 16:42:29 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:52.440 16:42:29 -- scripts/common.sh@24 -- # return 0 00:26:52.440 16:42:29 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:52.440 16:42:29 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:52.440 16:42:29 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:52.440 16:42:29 -- scripts/common.sh@322 -- # uname -s 00:26:52.440 16:42:29 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:52.440 16:42:29 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:52.440 16:42:29 -- scripts/common.sh@327 -- # (( 1 )) 00:26:52.440 16:42:29 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:26:52.440 16:42:29 -- dd/dd.sh@13 -- # check_liburing 00:26:52.440 16:42:29 -- dd/common.sh@139 -- # local lib so 00:26:52.440 16:42:29 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:26:52.440 16:42:29 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libasan.so.5 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- 
# read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:26:52.440 16:42:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:52.440 16:42:29 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:26:52.440 16:42:29 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:52.440 16:42:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:52.440 16:42:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:52.440 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:26:52.440 ************************************ 00:26:52.440 START TEST spdk_dd_basic_rw 00:26:52.440 ************************************ 00:26:52.440 16:42:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:52.440 * Looking for test storage... 
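
Two helper probes run back to back in the traces above. nvme_in_userspace filters lspci output by PCI class code 0108 with programming interface 02 (an NVMe controller), and check_liburing asks the dynamic loader for spdk_dd's shared-library dependencies to decide whether liburing is linked in. Condensed sketches of both, assuming only lspci and a built spdk_dd; names mirror the scripts, bodies are simplified:

  # 1) nvme_in_userspace: find NVMe controllers by PCI class/subclass/prog-if
  #    01/08/02, i.e. the class string "0108" plus the -p02 suffix.
  nvme_in_userspace() {
    local cc='"0108"'   # lspci -mm quotes its fields, so keep the quotes in the pattern
    # -mm: machine-readable, -n: numeric IDs, -D: always print the PCI domain
    lspci -mm -n -D | grep -i -- -p02 |
      awk -v cc="$cc" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  }

  # 2) check_liburing: LD_TRACE_LOADED_OBJECTS=1 makes the dynamic loader
  #    print the binary's shared-library dependencies instead of running it.
  check_liburing() {
    local lib so liburing_in_use=0
    while read -r lib _ so _; do
      [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
    echo "$liburing_in_use"
  }

On this VM the first probe prints 0000:00:06.0 and the second prints 0, which is why the liburing branch of dd.sh is skipped.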
00:26:52.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:52.440 16:42:29 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:52.440 16:42:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.440 16:42:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.440 16:42:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.440 16:42:29 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:52.440 16:42:29 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:52.440 16:42:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:52.440 16:42:29 -- paths/export.sh@5 -- # export PATH 00:26:52.440 16:42:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:52.440 16:42:29 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:26:52.440 16:42:29 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:26:52.440 16:42:29 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:26:52.440 16:42:29 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:26:52.440 16:42:29 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:26:52.440 16:42:29 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:26:52.440 16:42:29 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:26:52.440 16:42:29 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:52.440 16:42:29 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:52.440 16:42:29 -- dd/basic_rw.sh@93 
-- # get_native_nvme_bs 0000:00:06.0 00:26:52.440 16:42:29 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:26:52.440 16:42:29 -- dd/common.sh@126 -- # mapfile -t id 00:26:52.440 16:42:29 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:26:52.701 16:42:29 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects 
Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 96 Data Units Written: 7 Host Read Commands: 2113 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 
Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:26:52.701 16:42:29 -- dd/common.sh@130 -- # lbaf=04 00:26:52.702 16:42:29 -- dd/common.sh@131 -- # [[ <same spdk_nvme_identify output as above, verbatim duplicate elided> =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:26:52.702 16:42:29 -- dd/common.sh@132 -- # lbaf=4096 00:26:52.702 16:42:29 -- dd/common.sh@134 -- # echo 4096 00:26:52.702 16:42:29 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:26:52.702 16:42:29 -- dd/basic_rw.sh@96 -- # : 00:26:52.702 16:42:29 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:52.702 16:42:29 -- dd/basic_rw.sh@96 -- # gen_conf 00:26:52.702 16:42:29 -- dd/common.sh@31 -- # xtrace_disable 00:26:52.702 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:26:52.702 16:42:29 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:26:52.702 16:42:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:52.702 16:42:29 -- common/autotest_common.sh@10 -- # set +x 00:26:52.702 ************************************ 00:26:52.702 START TEST dd_bs_lt_native_bs
00:26:52.702 ************************************ 00:26:52.702 16:42:29 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:52.702 16:42:29 -- common/autotest_common.sh@640 -- # local es=0 00:26:52.702 16:42:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:52.702 16:42:29 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:52.702 16:42:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:52.702 16:42:29 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:52.702 16:42:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:52.702 16:42:29 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:52.702 16:42:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:26:52.702 16:42:29 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:52.702 16:42:29 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:52.702 16:42:29 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:52.961 { 00:26:52.961 "subsystems": [ 00:26:52.961 { 00:26:52.961 "subsystem": "bdev", 00:26:52.961 "config": [ 00:26:52.961 { 00:26:52.961 "params": { 00:26:52.961 "trtype": "pcie", 00:26:52.961 "traddr": "0000:00:06.0", 00:26:52.961 "name": "Nvme0" 00:26:52.961 }, 00:26:52.961 "method": "bdev_nvme_attach_controller" 00:26:52.961 }, 00:26:52.961 { 00:26:52.961 "method": "bdev_wait_for_examine" 00:26:52.961 } 00:26:52.961 ] 00:26:52.961 } 00:26:52.961 ] 00:26:52.961 } 00:26:52.961 [2024-07-11 16:42:29.549080] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
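
The get_native_nvme_bs call traced just above extracts the namespace's in-use block size from the spdk_nvme_identify dump with plain bash regexes: first the current LBA format number, then that format's data size. A condensed sketch (the real helper in dd/common.sh captures the output with mapfile but uses the same two patterns):

  get_native_nvme_bs() {
    local pci=$1 id lbaf re
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:pcie traddr:$pci")
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}      # here: 04
    re="LBA Format #$lbaf: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] && echo "${BASH_REMATCH[1]}"    # here: 4096
  }
  get_native_nvme_bs 0000:00:06.0   # prints 4096 on this QEMU controller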
00:26:52.961 [2024-07-11 16:42:29.549414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137141 ] 00:26:52.961 [2024-07-11 16:42:29.702949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.218 [2024-07-11 16:42:29.925431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.476 [2024-07-11 16:42:30.269725] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:26:53.476 [2024-07-11 16:42:30.269830] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:54.041 [2024-07-11 16:42:30.845974] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:54.605 ************************************ 00:26:54.605 END TEST dd_bs_lt_native_bs 00:26:54.605 ************************************ 00:26:54.605 16:42:31 -- common/autotest_common.sh@643 -- # es=234 00:26:54.605 16:42:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:26:54.605 16:42:31 -- common/autotest_common.sh@652 -- # es=106 00:26:54.605 16:42:31 -- common/autotest_common.sh@653 -- # case "$es" in 00:26:54.605 16:42:31 -- common/autotest_common.sh@660 -- # es=1 00:26:54.605 16:42:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:26:54.605 00:26:54.605 real 0m1.680s 00:26:54.605 user 0m1.415s 00:26:54.605 sys 0m0.228s 00:26:54.605 16:42:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.605 16:42:31 -- common/autotest_common.sh@10 -- # set +x 00:26:54.605 16:42:31 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:26:54.605 16:42:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:54.605 16:42:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:54.605 16:42:31 -- common/autotest_common.sh@10 -- # set +x 00:26:54.605 ************************************ 00:26:54.605 START TEST dd_rw 00:26:54.605 ************************************ 00:26:54.605 16:42:31 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:26:54.605 16:42:31 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:26:54.605 16:42:31 -- dd/basic_rw.sh@12 -- # local count size 00:26:54.605 16:42:31 -- dd/basic_rw.sh@13 -- # local qds bss 00:26:54.605 16:42:31 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:26:54.605 16:42:31 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:54.605 16:42:31 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:54.605 16:42:31 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:54.605 16:42:31 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:54.605 16:42:31 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:54.605 16:42:31 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:54.605 16:42:31 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:54.605 16:42:31 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:54.605 16:42:31 -- dd/basic_rw.sh@23 -- # count=15 00:26:54.605 16:42:31 -- dd/basic_rw.sh@24 -- # count=15 00:26:54.605 16:42:31 -- dd/basic_rw.sh@25 -- # size=61440 00:26:54.605 16:42:31 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:26:54.605 16:42:31 -- dd/common.sh@98 -- # xtrace_disable 00:26:54.605 16:42:31 -- common/autotest_common.sh@10 -- # set +x 00:26:55.171 16:42:31 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 
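
dd_bs_lt_native_bs is a negative test: spdk_dd must refuse a --bs (2048) smaller than the native block size just detected (4096). The NOT wrapper inverts the exit status, and the es=234, es=106, es=1 lines above show the raw status being folded down (234 - 128 = 106, then collapsed to a generic 1) before the inversion. A hedged sketch of that pattern; expect_failure is an illustrative name, not the autotest helper:

  # Run a command that is expected to fail; succeed only if it does fail.
  expect_failure() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es - 128 ))   # fold signal/special codes (234 -> 106)
    (( es != 0 ))                          # return 0 iff the command failed
  }
  # fd 61/62 are wired up by the harness via process substitution:
  expect_failure /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61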
00:26:55.171 16:42:31 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:55.171 16:42:31 -- dd/common.sh@31 -- # xtrace_disable 00:26:55.171 16:42:31 -- common/autotest_common.sh@10 -- # set +x 00:26:55.171 { 00:26:55.171 "subsystems": [ 00:26:55.171 { 00:26:55.171 "subsystem": "bdev", 00:26:55.171 "config": [ 00:26:55.171 { 00:26:55.171 "params": { 00:26:55.171 "trtype": "pcie", 00:26:55.171 "traddr": "0000:00:06.0", 00:26:55.171 "name": "Nvme0" 00:26:55.171 }, 00:26:55.171 "method": "bdev_nvme_attach_controller" 00:26:55.171 }, 00:26:55.171 { 00:26:55.171 "method": "bdev_wait_for_examine" 00:26:55.171 } 00:26:55.171 ] 00:26:55.171 } 00:26:55.171 ] 00:26:55.171 } 00:26:55.171 [2024-07-11 16:42:31.858579] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:55.171 [2024-07-11 16:42:31.858777] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137209 ] 00:26:55.429 [2024-07-11 16:42:32.025806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.429 [2024-07-11 16:42:32.178835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.629  Copying: 60/60 [kB] (average 19 MBps) 00:26:56.629 00:26:56.630 16:42:33 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:26:56.630 16:42:33 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:56.630 16:42:33 -- dd/common.sh@31 -- # xtrace_disable 00:26:56.630 16:42:33 -- common/autotest_common.sh@10 -- # set +x 00:26:56.630 [2024-07-11 16:42:33.431104] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
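
Each dd_rw iteration is one write/read/compare round trip: the pre-generated dd.dump0 is written to the Nvme0n1 bdev at a given block size and queue depth, read back into dd.dump1, and the two files are compared with diff -q. A condensed sketch; nvme0.json is a stand-in for the generated config (reproduced further below), which reaches spdk_dd as /dev/fd/62 via process substitution:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  bs=4096 qd=1 count=15   # first iteration: 15 * 4096 = 61440 bytes
  # dd.dump0 has already been filled by gen_bytes (a stand-in is sketched later)
  "$DD" --if="$DUMP0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(cat nvme0.json)
  "$DD" --ib=Nvme0n1 --of="$DUMP1" --bs="$bs" --qd="$qd" --count="$count" --json <(cat nvme0.json)
  diff -q "$DUMP0" "$DUMP1"   # silence means the round trip matched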
00:26:56.630 [2024-07-11 16:42:33.431295] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137229 ] 00:26:56.630 { 00:26:56.630 "subsystems": [ 00:26:56.630 { 00:26:56.630 "subsystem": "bdev", 00:26:56.630 "config": [ 00:26:56.630 { 00:26:56.630 "params": { 00:26:56.630 "trtype": "pcie", 00:26:56.630 "traddr": "0000:00:06.0", 00:26:56.630 "name": "Nvme0" 00:26:56.630 }, 00:26:56.630 "method": "bdev_nvme_attach_controller" 00:26:56.630 }, 00:26:56.630 { 00:26:56.630 "method": "bdev_wait_for_examine" 00:26:56.630 } 00:26:56.630 ] 00:26:56.630 } 00:26:56.630 ] 00:26:56.630 } 00:26:56.887 [2024-07-11 16:42:33.598430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.144 [2024-07-11 16:42:33.776537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.337  Copying: 60/60 [kB] (average 19 MBps) 00:26:58.337 00:26:58.337 16:42:35 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:58.338 16:42:35 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:26:58.338 16:42:35 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:58.338 16:42:35 -- dd/common.sh@11 -- # local nvme_ref= 00:26:58.338 16:42:35 -- dd/common.sh@12 -- # local size=61440 00:26:58.338 16:42:35 -- dd/common.sh@14 -- # local bs=1048576 00:26:58.338 16:42:35 -- dd/common.sh@15 -- # local count=1 00:26:58.338 16:42:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:58.338 16:42:35 -- dd/common.sh@18 -- # gen_conf 00:26:58.338 16:42:35 -- dd/common.sh@31 -- # xtrace_disable 00:26:58.338 16:42:35 -- common/autotest_common.sh@10 -- # set +x 00:26:58.595 [2024-07-11 16:42:35.159514] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:58.595 [2024-07-11 16:42:35.159722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137261 ] 00:26:58.595 { 00:26:58.595 "subsystems": [ 00:26:58.595 { 00:26:58.595 "subsystem": "bdev", 00:26:58.595 "config": [ 00:26:58.595 { 00:26:58.595 "params": { 00:26:58.595 "trtype": "pcie", 00:26:58.595 "traddr": "0000:00:06.0", 00:26:58.595 "name": "Nvme0" 00:26:58.595 }, 00:26:58.595 "method": "bdev_nvme_attach_controller" 00:26:58.596 }, 00:26:58.596 { 00:26:58.596 "method": "bdev_wait_for_examine" 00:26:58.596 } 00:26:58.596 ] 00:26:58.596 } 00:26:58.596 ] 00:26:58.596 } 00:26:58.596 [2024-07-11 16:42:35.327016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.854 [2024-07-11 16:42:35.500641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.045  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:00.045 00:27:00.045 16:42:36 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:00.045 16:42:36 -- dd/basic_rw.sh@23 -- # count=15 00:27:00.045 16:42:36 -- dd/basic_rw.sh@24 -- # count=15 00:27:00.045 16:42:36 -- dd/basic_rw.sh@25 -- # size=61440 00:27:00.045 16:42:36 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:27:00.045 16:42:36 -- dd/common.sh@98 -- # xtrace_disable 00:27:00.046 16:42:36 -- common/autotest_common.sh@10 -- # set +x 00:27:00.611 16:42:37 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:27:00.611 16:42:37 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:00.611 16:42:37 -- dd/common.sh@31 -- # xtrace_disable 00:27:00.611 16:42:37 -- common/autotest_common.sh@10 -- # set +x 00:27:00.611 [2024-07-11 16:42:37.279394] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
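
Between runs, clear_nvme overwrites the tested region with zeroes so a later read cannot pass on stale data; with size=61440 and a 1 MiB block size a single zero block suffices, which is the "Copying: 1024/1024 [kB]" transfer above. A sketch, assuming count is rounded up from size (the log only shows count=1):

  clear_nvme() {
    local bdev=$1 size=$2 bs=1048576
    local count=$(( (size + bs - 1) / bs ))   # assumed round-up to whole 1 MiB blocks
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/dev/zero --bs="$bs" --ob="$bdev" --count="$count" --json <(cat nvme0.json)
  }
  clear_nvme Nvme0n1 61440   # 61440 bytes -> one 1 MiB zero block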
00:27:00.611 [2024-07-11 16:42:37.279588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137289 ] 00:27:00.611 { 00:27:00.611 "subsystems": [ 00:27:00.611 { 00:27:00.611 "subsystem": "bdev", 00:27:00.611 "config": [ 00:27:00.611 { 00:27:00.611 "params": { 00:27:00.611 "trtype": "pcie", 00:27:00.611 "traddr": "0000:00:06.0", 00:27:00.611 "name": "Nvme0" 00:27:00.611 }, 00:27:00.611 "method": "bdev_nvme_attach_controller" 00:27:00.611 }, 00:27:00.611 { 00:27:00.611 "method": "bdev_wait_for_examine" 00:27:00.611 } 00:27:00.611 ] 00:27:00.611 } 00:27:00.611 ] 00:27:00.612 } 00:27:00.870 [2024-07-11 16:42:37.446109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.870 [2024-07-11 16:42:37.618344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.383  Copying: 60/60 [kB] (average 58 MBps) 00:27:02.383 00:27:02.383 16:42:38 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:27:02.383 16:42:38 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:02.383 16:42:38 -- dd/common.sh@31 -- # xtrace_disable 00:27:02.383 16:42:38 -- common/autotest_common.sh@10 -- # set +x 00:27:02.383 { 00:27:02.383 "subsystems": [ 00:27:02.383 { 00:27:02.383 "subsystem": "bdev", 00:27:02.383 "config": [ 00:27:02.383 { 00:27:02.383 "params": { 00:27:02.383 "trtype": "pcie", 00:27:02.383 "traddr": "0000:00:06.0", 00:27:02.383 "name": "Nvme0" 00:27:02.383 }, 00:27:02.383 "method": "bdev_nvme_attach_controller" 00:27:02.383 }, 00:27:02.383 { 00:27:02.383 "method": "bdev_wait_for_examine" 00:27:02.383 } 00:27:02.383 ] 00:27:02.383 } 00:27:02.383 ] 00:27:02.383 } 00:27:02.383 [2024-07-11 16:42:38.942043] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
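
For reference, the configuration every spdk_dd invocation above receives over --json is the same short document each time; flattened into the trace it is hard to read, so here it is verbatim but re-indented:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "trtype": "pcie",
              "traddr": "0000:00:06.0",
              "name": "Nvme0"
            },
            "method": "bdev_nvme_attach_controller"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }

It attaches the QEMU NVMe controller at 0000:00:06.0 as bdev Nvme0n1 and waits for examination before I/O starts.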
00:27:02.383 [2024-07-11 16:42:38.942233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137320 ] 00:27:02.383 [2024-07-11 16:42:39.104421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.678 [2024-07-11 16:42:39.261298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.882  Copying: 60/60 [kB] (average 58 MBps) 00:27:03.882 00:27:03.882 16:42:40 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:03.882 16:42:40 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:27:03.882 16:42:40 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:03.882 16:42:40 -- dd/common.sh@11 -- # local nvme_ref= 00:27:03.882 16:42:40 -- dd/common.sh@12 -- # local size=61440 00:27:03.882 16:42:40 -- dd/common.sh@14 -- # local bs=1048576 00:27:03.882 16:42:40 -- dd/common.sh@15 -- # local count=1 00:27:03.882 16:42:40 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:03.882 16:42:40 -- dd/common.sh@18 -- # gen_conf 00:27:03.882 16:42:40 -- dd/common.sh@31 -- # xtrace_disable 00:27:03.882 16:42:40 -- common/autotest_common.sh@10 -- # set +x 00:27:03.882 [2024-07-11 16:42:40.562888] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:03.882 [2024-07-11 16:42:40.563089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137362 ] 00:27:03.882 { 00:27:03.882 "subsystems": [ 00:27:03.882 { 00:27:03.882 "subsystem": "bdev", 00:27:03.882 "config": [ 00:27:03.882 { 00:27:03.882 "params": { 00:27:03.882 "trtype": "pcie", 00:27:03.882 "traddr": "0000:00:06.0", 00:27:03.882 "name": "Nvme0" 00:27:03.882 }, 00:27:03.882 "method": "bdev_nvme_attach_controller" 00:27:03.882 }, 00:27:03.882 { 00:27:03.882 "method": "bdev_wait_for_examine" 00:27:03.882 } 00:27:03.882 ] 00:27:03.882 } 00:27:03.882 ] 00:27:03.882 } 00:27:04.140 [2024-07-11 16:42:40.729759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.140 [2024-07-11 16:42:40.889768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.774  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:05.774 00:27:05.774 16:42:42 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:05.774 16:42:42 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:05.774 16:42:42 -- dd/basic_rw.sh@23 -- # count=7 00:27:05.774 16:42:42 -- dd/basic_rw.sh@24 -- # count=7 00:27:05.774 16:42:42 -- dd/basic_rw.sh@25 -- # size=57344 00:27:05.774 16:42:42 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:27:05.774 16:42:42 -- dd/common.sh@98 -- # xtrace_disable 00:27:05.774 16:42:42 -- common/autotest_common.sh@10 -- # set +x 00:27:06.030 16:42:42 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:27:06.030 16:42:42 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:06.030 16:42:42 -- dd/common.sh@31 -- # xtrace_disable 00:27:06.030 16:42:42 -- common/autotest_common.sh@10 -- # set +x 00:27:06.030 { 00:27:06.030 "subsystems": [ 00:27:06.030 { 
00:27:06.030 "subsystem": "bdev", 00:27:06.030 "config": [ 00:27:06.030 { 00:27:06.030 "params": { 00:27:06.030 "trtype": "pcie", 00:27:06.030 "traddr": "0000:00:06.0", 00:27:06.030 "name": "Nvme0" 00:27:06.030 }, 00:27:06.030 "method": "bdev_nvme_attach_controller" 00:27:06.030 }, 00:27:06.030 { 00:27:06.030 "method": "bdev_wait_for_examine" 00:27:06.030 } 00:27:06.030 ] 00:27:06.030 } 00:27:06.030 ] 00:27:06.030 } 00:27:06.030 [2024-07-11 16:42:42.731801] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:06.030 [2024-07-11 16:42:42.731992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137393 ] 00:27:06.288 [2024-07-11 16:42:42.893109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.288 [2024-07-11 16:42:43.047001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.435  Copying: 56/56 [kB] (average 27 MBps) 00:27:07.435 00:27:07.435 16:42:44 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:27:07.435 16:42:44 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:07.435 16:42:44 -- dd/common.sh@31 -- # xtrace_disable 00:27:07.435 16:42:44 -- common/autotest_common.sh@10 -- # set +x 00:27:07.693 [2024-07-11 16:42:44.294225] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:07.693 [2024-07-11 16:42:44.294417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137421 ] 00:27:07.693 { 00:27:07.693 "subsystems": [ 00:27:07.693 { 00:27:07.693 "subsystem": "bdev", 00:27:07.693 "config": [ 00:27:07.693 { 00:27:07.693 "params": { 00:27:07.693 "trtype": "pcie", 00:27:07.693 "traddr": "0000:00:06.0", 00:27:07.693 "name": "Nvme0" 00:27:07.693 }, 00:27:07.693 "method": "bdev_nvme_attach_controller" 00:27:07.693 }, 00:27:07.693 { 00:27:07.693 "method": "bdev_wait_for_examine" 00:27:07.693 } 00:27:07.693 ] 00:27:07.693 } 00:27:07.693 ] 00:27:07.693 } 00:27:07.693 [2024-07-11 16:42:44.461650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.950 [2024-07-11 16:42:44.615913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.145  Copying: 56/56 [kB] (average 27 MBps) 00:27:09.145 00:27:09.145 16:42:45 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:09.145 16:42:45 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:27:09.145 16:42:45 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:09.145 16:42:45 -- dd/common.sh@11 -- # local nvme_ref= 00:27:09.145 16:42:45 -- dd/common.sh@12 -- # local size=57344 00:27:09.145 16:42:45 -- dd/common.sh@14 -- # local bs=1048576 00:27:09.145 16:42:45 -- dd/common.sh@15 -- # local count=1 00:27:09.145 16:42:45 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:09.145 16:42:45 -- dd/common.sh@18 -- # gen_conf 00:27:09.145 16:42:45 -- dd/common.sh@31 -- # xtrace_disable 00:27:09.145 16:42:45 -- common/autotest_common.sh@10 -- # set +x 00:27:09.145 
[2024-07-11 16:42:45.920633] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:09.145 [2024-07-11 16:42:45.921340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137442 ] 00:27:09.145 { 00:27:09.145 "subsystems": [ 00:27:09.145 { 00:27:09.145 "subsystem": "bdev", 00:27:09.145 "config": [ 00:27:09.145 { 00:27:09.145 "params": { 00:27:09.145 "trtype": "pcie", 00:27:09.145 "traddr": "0000:00:06.0", 00:27:09.145 "name": "Nvme0" 00:27:09.145 }, 00:27:09.145 "method": "bdev_nvme_attach_controller" 00:27:09.145 }, 00:27:09.145 { 00:27:09.145 "method": "bdev_wait_for_examine" 00:27:09.145 } 00:27:09.145 ] 00:27:09.145 } 00:27:09.145 ] 00:27:09.145 } 00:27:09.404 [2024-07-11 16:42:46.074196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.663 [2024-07-11 16:42:46.227636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.857  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:10.857 00:27:10.857 16:42:47 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:10.857 16:42:47 -- dd/basic_rw.sh@23 -- # count=7 00:27:10.857 16:42:47 -- dd/basic_rw.sh@24 -- # count=7 00:27:10.857 16:42:47 -- dd/basic_rw.sh@25 -- # size=57344 00:27:10.857 16:42:47 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:27:10.857 16:42:47 -- dd/common.sh@98 -- # xtrace_disable 00:27:10.857 16:42:47 -- common/autotest_common.sh@10 -- # set +x 00:27:11.116 16:42:47 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:27:11.116 16:42:47 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:11.116 16:42:47 -- dd/common.sh@31 -- # xtrace_disable 00:27:11.116 16:42:47 -- common/autotest_common.sh@10 -- # set +x 00:27:11.374 { 00:27:11.374 "subsystems": [ 00:27:11.374 { 00:27:11.374 "subsystem": "bdev", 00:27:11.374 "config": [ 00:27:11.374 { 00:27:11.374 "params": { 00:27:11.374 "trtype": "pcie", 00:27:11.374 "traddr": "0000:00:06.0", 00:27:11.374 "name": "Nvme0" 00:27:11.375 }, 00:27:11.375 "method": "bdev_nvme_attach_controller" 00:27:11.375 }, 00:27:11.375 { 00:27:11.375 "method": "bdev_wait_for_examine" 00:27:11.375 } 00:27:11.375 ] 00:27:11.375 } 00:27:11.375 ] 00:27:11.375 } 00:27:11.375 [2024-07-11 16:42:47.966198] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
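
The cycle repeating above is a sweep: block sizes are the native size shifted left zero to two times (4096, 8192, 16384), each driven at queue depths 1 and 64, with the transfer count picked so that count x bs stays in the 48-60 KiB range (15 x 4096 = 61440, 7 x 8192 = 57344, 3 x 16384 = 49152). A sketch of the driver; run_pass is a hypothetical wrapper around the write/read/diff cycle shown earlier:

  native_bs=4096
  qds=(1 64)
  bss=()
  for bs in {0..2}; do
    bss+=( $(( native_bs << bs )) )   # 4096 8192 16384
  done
  counts=(15 7 3)                     # count * bs: 61440, 57344, 49152 bytes
  for i in "${!bss[@]}"; do
    for qd in "${qds[@]}"; do
      run_pass "${bss[$i]}" "$qd" "${counts[$i]}"   # hypothetical helper
    done
  done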
00:27:11.375 [2024-07-11 16:42:47.966369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137474 ] 00:27:11.375 [2024-07-11 16:42:48.130867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.633 [2024-07-11 16:42:48.284550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.828  Copying: 56/56 [kB] (average 54 MBps) 00:27:12.828 00:27:12.828 16:42:49 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:27:12.828 16:42:49 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:12.828 16:42:49 -- dd/common.sh@31 -- # xtrace_disable 00:27:12.828 16:42:49 -- common/autotest_common.sh@10 -- # set +x 00:27:12.828 [2024-07-11 16:42:49.589771] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:12.828 [2024-07-11 16:42:49.590366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137501 ] 00:27:12.828 { 00:27:12.828 "subsystems": [ 00:27:12.828 { 00:27:12.828 "subsystem": "bdev", 00:27:12.828 "config": [ 00:27:12.828 { 00:27:12.828 "params": { 00:27:12.828 "trtype": "pcie", 00:27:12.828 "traddr": "0000:00:06.0", 00:27:12.828 "name": "Nvme0" 00:27:12.828 }, 00:27:12.828 "method": "bdev_nvme_attach_controller" 00:27:12.828 }, 00:27:12.828 { 00:27:12.828 "method": "bdev_wait_for_examine" 00:27:12.828 } 00:27:12.828 ] 00:27:12.828 } 00:27:12.828 ] 00:27:12.828 } 00:27:13.087 [2024-07-11 16:42:49.746031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.345 [2024-07-11 16:42:49.905248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.540  Copying: 56/56 [kB] (average 54 MBps) 00:27:14.540 00:27:14.540 16:42:51 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:14.540 16:42:51 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:27:14.540 16:42:51 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:14.540 16:42:51 -- dd/common.sh@11 -- # local nvme_ref= 00:27:14.540 16:42:51 -- dd/common.sh@12 -- # local size=57344 00:27:14.540 16:42:51 -- dd/common.sh@14 -- # local bs=1048576 00:27:14.540 16:42:51 -- dd/common.sh@15 -- # local count=1 00:27:14.540 16:42:51 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:14.540 16:42:51 -- dd/common.sh@18 -- # gen_conf 00:27:14.540 16:42:51 -- dd/common.sh@31 -- # xtrace_disable 00:27:14.540 16:42:51 -- common/autotest_common.sh@10 -- # set +x 00:27:14.540 [2024-07-11 16:42:51.156671] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:14.540 [2024-07-11 16:42:51.156879] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137547 ] 00:27:14.540 { 00:27:14.540 "subsystems": [ 00:27:14.540 { 00:27:14.540 "subsystem": "bdev", 00:27:14.540 "config": [ 00:27:14.540 { 00:27:14.540 "params": { 00:27:14.540 "trtype": "pcie", 00:27:14.540 "traddr": "0000:00:06.0", 00:27:14.540 "name": "Nvme0" 00:27:14.540 }, 00:27:14.540 "method": "bdev_nvme_attach_controller" 00:27:14.540 }, 00:27:14.540 { 00:27:14.540 "method": "bdev_wait_for_examine" 00:27:14.540 } 00:27:14.540 ] 00:27:14.540 } 00:27:14.540 ] 00:27:14.540 } 00:27:14.540 [2024-07-11 16:42:51.321580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.798 [2024-07-11 16:42:51.480571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.993  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:15.993 00:27:15.993 16:42:52 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:27:15.993 16:42:52 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:15.993 16:42:52 -- dd/basic_rw.sh@23 -- # count=3 00:27:15.993 16:42:52 -- dd/basic_rw.sh@24 -- # count=3 00:27:15.993 16:42:52 -- dd/basic_rw.sh@25 -- # size=49152 00:27:15.993 16:42:52 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:27:15.993 16:42:52 -- dd/common.sh@98 -- # xtrace_disable 00:27:15.993 16:42:52 -- common/autotest_common.sh@10 -- # set +x 00:27:16.559 16:42:53 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:27:16.559 16:42:53 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:16.559 16:42:53 -- dd/common.sh@31 -- # xtrace_disable 00:27:16.559 16:42:53 -- common/autotest_common.sh@10 -- # set +x 00:27:16.559 [2024-07-11 16:42:53.249639] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
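
gen_bytes (invoked above with 49152 for the 16 KiB passes) fills dd.dump0 with the requested number of bytes before each write. Its implementation is not visible in this log, so the following is only a plausible stand-in:

  gen_bytes() {
    local n=$1   # number of bytes of test data to produce
    head -c "$n" /dev/urandom > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  }
  gen_bytes 49152   # 3 blocks of 16384 bytes for the current pass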
00:27:16.559 [2024-07-11 16:42:53.249856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137579 ] 00:27:16.559 { 00:27:16.559 "subsystems": [ 00:27:16.559 { 00:27:16.559 "subsystem": "bdev", 00:27:16.559 "config": [ 00:27:16.559 { 00:27:16.559 "params": { 00:27:16.559 "trtype": "pcie", 00:27:16.559 "traddr": "0000:00:06.0", 00:27:16.559 "name": "Nvme0" 00:27:16.559 }, 00:27:16.559 "method": "bdev_nvme_attach_controller" 00:27:16.559 }, 00:27:16.559 { 00:27:16.559 "method": "bdev_wait_for_examine" 00:27:16.559 } 00:27:16.559 ] 00:27:16.559 } 00:27:16.559 ] 00:27:16.559 } 00:27:16.817 [2024-07-11 16:42:53.413266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.817 [2024-07-11 16:42:53.567778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.948  Copying: 48/48 [kB] (average 46 MBps) 00:27:17.948 00:27:17.948 16:42:54 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:27:17.948 16:42:54 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:17.948 16:42:54 -- dd/common.sh@31 -- # xtrace_disable 00:27:17.948 16:42:54 -- common/autotest_common.sh@10 -- # set +x 00:27:18.206 [2024-07-11 16:42:54.800278] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:18.206 [2024-07-11 16:42:54.800908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137599 ] 00:27:18.206 { 00:27:18.206 "subsystems": [ 00:27:18.206 { 00:27:18.206 "subsystem": "bdev", 00:27:18.206 "config": [ 00:27:18.206 { 00:27:18.206 "params": { 00:27:18.206 "trtype": "pcie", 00:27:18.206 "traddr": "0000:00:06.0", 00:27:18.206 "name": "Nvme0" 00:27:18.206 }, 00:27:18.206 "method": "bdev_nvme_attach_controller" 00:27:18.206 }, 00:27:18.206 { 00:27:18.206 "method": "bdev_wait_for_examine" 00:27:18.206 } 00:27:18.206 ] 00:27:18.206 } 00:27:18.206 ] 00:27:18.206 } 00:27:18.206 [2024-07-11 16:42:54.957056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.465 [2024-07-11 16:42:55.121830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.656  Copying: 48/48 [kB] (average 46 MBps) 00:27:19.656 00:27:19.656 16:42:56 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:19.656 16:42:56 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:27:19.656 16:42:56 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:19.656 16:42:56 -- dd/common.sh@11 -- # local nvme_ref= 00:27:19.656 16:42:56 -- dd/common.sh@12 -- # local size=49152 00:27:19.656 16:42:56 -- dd/common.sh@14 -- # local bs=1048576 00:27:19.656 16:42:56 -- dd/common.sh@15 -- # local count=1 00:27:19.656 16:42:56 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:19.656 16:42:56 -- dd/common.sh@18 -- # gen_conf 00:27:19.656 16:42:56 -- dd/common.sh@31 -- # xtrace_disable 00:27:19.656 16:42:56 -- common/autotest_common.sh@10 -- # set +x 00:27:19.656 [2024-07-11 16:42:56.441356] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 23.11.0 initialization... 00:27:19.656 [2024-07-11 16:42:56.442248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137632 ] 00:27:19.656 { 00:27:19.656 "subsystems": [ 00:27:19.656 { 00:27:19.656 "subsystem": "bdev", 00:27:19.656 "config": [ 00:27:19.656 { 00:27:19.656 "params": { 00:27:19.656 "trtype": "pcie", 00:27:19.656 "traddr": "0000:00:06.0", 00:27:19.656 "name": "Nvme0" 00:27:19.656 }, 00:27:19.656 "method": "bdev_nvme_attach_controller" 00:27:19.656 }, 00:27:19.656 { 00:27:19.656 "method": "bdev_wait_for_examine" 00:27:19.656 } 00:27:19.656 ] 00:27:19.656 } 00:27:19.656 ] 00:27:19.656 } 00:27:19.915 [2024-07-11 16:42:56.609852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.173 [2024-07-11 16:42:56.764347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.366  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:21.366 00:27:21.366 16:42:57 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:27:21.366 16:42:57 -- dd/basic_rw.sh@23 -- # count=3 00:27:21.366 16:42:57 -- dd/basic_rw.sh@24 -- # count=3 00:27:21.366 16:42:57 -- dd/basic_rw.sh@25 -- # size=49152 00:27:21.366 16:42:57 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:27:21.366 16:42:57 -- dd/common.sh@98 -- # xtrace_disable 00:27:21.366 16:42:57 -- common/autotest_common.sh@10 -- # set +x 00:27:21.624 16:42:58 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:27:21.624 16:42:58 -- dd/basic_rw.sh@30 -- # gen_conf 00:27:21.624 16:42:58 -- dd/common.sh@31 -- # xtrace_disable 00:27:21.624 16:42:58 -- common/autotest_common.sh@10 -- # set +x 00:27:21.882 [2024-07-11 16:42:58.446236] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:21.882 [2024-07-11 16:42:58.447223] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137659 ] 00:27:21.882 { 00:27:21.882 "subsystems": [ 00:27:21.882 { 00:27:21.882 "subsystem": "bdev", 00:27:21.882 "config": [ 00:27:21.882 { 00:27:21.882 "params": { 00:27:21.882 "trtype": "pcie", 00:27:21.882 "traddr": "0000:00:06.0", 00:27:21.882 "name": "Nvme0" 00:27:21.882 }, 00:27:21.882 "method": "bdev_nvme_attach_controller" 00:27:21.882 }, 00:27:21.882 { 00:27:21.882 "method": "bdev_wait_for_examine" 00:27:21.882 } 00:27:21.882 ] 00:27:21.882 } 00:27:21.882 ] 00:27:21.882 } 00:27:21.882 [2024-07-11 16:42:58.612872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.140 [2024-07-11 16:42:58.774897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.333  Copying: 48/48 [kB] (average 46 MBps) 00:27:23.333 00:27:23.333 16:43:00 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:27:23.333 16:43:00 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:23.333 16:43:00 -- dd/common.sh@31 -- # xtrace_disable 00:27:23.333 16:43:00 -- common/autotest_common.sh@10 -- # set +x 00:27:23.333 { 00:27:23.333 "subsystems": [ 00:27:23.333 { 00:27:23.333 "subsystem": "bdev", 00:27:23.333 "config": [ 00:27:23.333 { 00:27:23.333 "params": { 00:27:23.333 "trtype": "pcie", 00:27:23.333 "traddr": "0000:00:06.0", 00:27:23.333 "name": "Nvme0" 00:27:23.333 }, 00:27:23.333 "method": "bdev_nvme_attach_controller" 00:27:23.333 }, 00:27:23.333 { 00:27:23.333 "method": "bdev_wait_for_examine" 00:27:23.333 } 00:27:23.333 ] 00:27:23.333 } 00:27:23.333 ] 00:27:23.333 } 00:27:23.333 [2024-07-11 16:43:00.090925] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
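Each pass of the bs/qd loop in dd_rw has the same shape: generate a payload, write it to the bdev, read it back at the same block size and queue depth, compare, then zero the bdev before the next pass (the diff and clear_nvme steps appear just below). A condensed sketch of one iteration, reusing gen_conf and $SPDK_DD from the earlier sketch; sizes match the 49152-byte runs in the log:

# One dd_rw iteration (bs and qd vary per pass; count=3 blocks here).
SPDK_DD=${SPDK_DD:-./build/bin/spdk_dd}
bs=16384 qd=64 count=3
size=$((bs * count))                        # 49152 bytes, as in the log

head -c "$size" /dev/urandom > dd.dump0     # stand-in for the gen_bytes helper
"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" \
           --json <(gen_conf)
diff -q dd.dump0 dd.dump1                   # byte-for-byte integrity check
# clear_nvme between passes: overwrite the first 1 MiB of the bdev with zeroes.
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)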
00:27:23.333 [2024-07-11 16:43:00.091779] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137684 ] 00:27:23.592 [2024-07-11 16:43:00.254997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.850 [2024-07-11 16:43:00.416347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:25.058  Copying: 48/48 [kB] (average 46 MBps) 00:27:25.058 00:27:25.058 16:43:01 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:25.058 16:43:01 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:27:25.058 16:43:01 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:25.058 16:43:01 -- dd/common.sh@11 -- # local nvme_ref= 00:27:25.058 16:43:01 -- dd/common.sh@12 -- # local size=49152 00:27:25.058 16:43:01 -- dd/common.sh@14 -- # local bs=1048576 00:27:25.058 16:43:01 -- dd/common.sh@15 -- # local count=1 00:27:25.058 16:43:01 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:25.058 16:43:01 -- dd/common.sh@18 -- # gen_conf 00:27:25.058 16:43:01 -- dd/common.sh@31 -- # xtrace_disable 00:27:25.058 16:43:01 -- common/autotest_common.sh@10 -- # set +x 00:27:25.058 [2024-07-11 16:43:01.658783] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:25.058 [2024-07-11 16:43:01.659182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137732 ] 00:27:25.058 { 00:27:25.058 "subsystems": [ 00:27:25.058 { 00:27:25.058 "subsystem": "bdev", 00:27:25.058 "config": [ 00:27:25.058 { 00:27:25.058 "params": { 00:27:25.058 "trtype": "pcie", 00:27:25.058 "traddr": "0000:00:06.0", 00:27:25.058 "name": "Nvme0" 00:27:25.058 }, 00:27:25.058 "method": "bdev_nvme_attach_controller" 00:27:25.058 }, 00:27:25.058 { 00:27:25.058 "method": "bdev_wait_for_examine" 00:27:25.058 } 00:27:25.058 ] 00:27:25.058 } 00:27:25.058 ] 00:27:25.058 } 00:27:25.058 [2024-07-11 16:43:01.824327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.330 [2024-07-11 16:43:01.978017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.519  Copying: 1024/1024 [kB] (average 500 MBps) 00:27:26.519 00:27:26.519 ************************************ 00:27:26.519 END TEST dd_rw 00:27:26.519 ************************************ 00:27:26.519 00:27:26.519 real 0m32.000s 00:27:26.519 user 0m26.502s 00:27:26.519 sys 0m4.226s 00:27:26.519 16:43:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:26.519 16:43:03 -- common/autotest_common.sh@10 -- # set +x 00:27:26.519 16:43:03 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:27:26.519 16:43:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:26.519 16:43:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:26.519 16:43:03 -- common/autotest_common.sh@10 -- # set +x 00:27:26.519 ************************************ 00:27:26.519 START TEST dd_rw_offset 00:27:26.519 ************************************ 00:27:26.519 16:43:03 -- common/autotest_common.sh@1104 -- # basic_offset 00:27:26.519 16:43:03 -- dd/basic_rw.sh@52 -- # local count 
seek skip data data_check 00:27:26.519 16:43:03 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:27:26.519 16:43:03 -- dd/common.sh@98 -- # xtrace_disable 00:27:26.519 16:43:03 -- common/autotest_common.sh@10 -- # set +x 00:27:26.776 16:43:03 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:27:26.777 16:43:03 -- dd/basic_rw.sh@56 -- # data=hjly9pfu7unp5wj5f209qyae233rrc3lwevwxuqsod8j6zo1o1972i0bajt10y3vihbpg3n3v00jj9hdq64ed2j6uym1higiephpsgd8fsigykgcup3r6a9sua1va89fju48i3zzz8o72bm3se4csdotuvs95ba1xxxdt1ftk4grovhq8bg21si9wo44vxhejgd4t7679vzao0hp1m5ic3xp0bt42zfs5l9mtomidxmih9g9s17ih90dulruwjgxgxyfd6jyu8wz1h25isu3tfpsadlfp4daq6jffpspeultufe17dwevq9a0u4zhgk7519htq24ra8exms8dka4dc86wsh9126els4o94mfymgpdudifpnrqhbs09ftt823gjet88oh0h0o9gsyqsm7edojc1gj4nm5mograrooxhvzgsyfk6xvgygv4olqn46o6unq9p2wibz7yxfv8v2y0btjgt80u1fstn5nf62g23em8n71wzdux7zwsqrbgr2y8irh7gyhz61x5ckmag8mx8xk44sg8mst44rfs2oxyzyrha662anciyk7bjrus2gpnnca3rg9v1zx8d3m6oy9ibgy1b8etr89ltqjof46vxdaujcta7oybbae4f1vacipxyoi8ybnn1mbdiih76j9hzbe8njqpri8kh538uuiy3m43ci7af1jnboc9w3bhgih73yfyfxwfjlmhq63hs8brdv602u1up8zswaos1okktaqqv8syue4pridv6dm4xa8vdd2bsxvqhigkc6ptp1g5jtsfbs9pwigqiqhlz8u08esa7pf56b5d9zqdzia4ar7as44xe7zh2478qc9d6ttm05xh0qez48xl95sgrep7dr81vofbqpbiclks6egx06ps9107ayl249gw1vi8mtq35evse2i63tx3i4jmnx6fhlhltnrvlhtpfk4hof1tefvfplhrzv04w1u27qlgrijm72pl26ct6zdgp3aydv138kcvuvteptx369hiq9tiba94m2i6vzho2pa08srwvmhpfa87hxmf0nopwk0i2nwk8orsmwlnv121uexfj2yqo7tyfkybck0y7ytrd6laq7d6sof0s7f82hoolef8llm6vhbq3cv53n7cis8fotivwxpmjlg4g90css6zlw7jf4qmwavh2qezks60z8bsn96ycu5p7vbhgna88ogfqwjeoxdmch5hiwwvleac5dl2ldlpyzcohmy31wg7wsh8ygd72o415go3iujlllhbcuyr92yo2xn2jfh1yop0ze9if4j7uys8bv6irqq9wo72xh74npxu4gi18yrlkzhkw4237gus1zjw12ufvhu6ikcndm3fikd2ntpb3ep9xki034uucce0p7pazxx67b90r37g8prhs1xzzq0neu5tedaa75jbebhwrur51ytm33pupd7hrnxd0nbkmgs5r97xm24ptw0oa4qory44e1zvumuk43e0w0ni6236z10c1tgldo1xmi54ojiyqirjumw2hstm4zn3j9ek5g8l0bkt9k356wo15z24j2nim5970f9k10jfz0lll4tbwese6hf2i03bbm1mtm6mb8pgmyffc4wj2h7bl6xi5f9zjehvbzj09tswxd9qqiamx5yy3v90phr2erub8jq5vkkdm5q8v816fuu63fwplkfazwm4il6uiuh2ohd9m39ac682bb8un22o53w2loqginix0pqn5r0xqlm17r7hniwuhwsnaod0vebfsibycn8jbxp8882tt3k5xmk4ukkggksoymnzgf13aty7d48zvkv2gp2xptr5o2tkgx1s7rypn9wertvwixv1l09twkspm1lrje9obc1m5ytqo686gqp0rc4ch8l3ol1gnbm1rqky5jp1jp3adr9dw67w3bgw0qqq79zpyziltvysftsndry6d3pfxom0sdhqnkca3rskleqa6ms3ftld8d6i6hanvz5pmoyivadn3p9hcmx3uigxr7u3ftjxchohjdjs5jwrdnkg402j1rktwd991nu1j56wbzcgkbryxcv9f2e17cobsvb6ir28rjq7aweu9jkdslk8rw3opidxjb1yp54s4rgavymyr2nle0haybfjofjr2ds4jm42eyhq02g0bmwjdne0t9z888psdav9jtuzl6sspody1dem5f77ohfdsj5otdt61z3yu5ctwaoytp5shehp9g3n4kqu3d14g2a2b79zb6e0hbguy2hcp3u4d0wmn1hejeb8xn1k86t4nrx5ia2ky8ctntiqmwpk703qwe95gw2kr0lc4nk9c960yzle37pncwprpot62msgsa2rdfef2d3rm96z4akoewf58r0il7eaut2zyuiidf49uwcvu27g0vrzf8okq6npks5os3lsbex31ou18qay0tb0ebkdk2y2vfeyeozzrbtzntss6hpt7z5nxo3l9aodskm2lcm4omey0cgf1p5lad0jqqzqfxb7yynsfxecyr1b3ieif77h90en9bcmrs168q5it20b02gyyenv1nmw5b7ogrwc66dcgp4mwa908y7l9umhuuc464a4sdx0oufhaso7bxfiegtskxkfd8sgmgvpn7h56kfb4xmoaldd8xwzv7xxk9mpc0rxthid5pf8wjpoawhl7uczvq0egenok5yzkx8jtb44u2npuq6nsz3ny8m5pjcsy70i46kf1h8ifsuil7ui2jupsa715rzdn1ngxohr9u6as54j356hw12imshubv3wkvdhkvajwfrhqv315aaq9ku5n91nvsn0zv0oipltoc7mx88g8rkz5prd1ywwc6n0ab508ge8smuq0fcuxl3gmnhqqhi98cra067nvr0urymhbjkx3xpw3wla2efrl1tutci0secvu0fkyr2s5z4q8ghmezhwoq6lomzo0sujoh1faav3md76q7874a8mxxxmrx5hgo71weevkv9wzruj7v2ulhvt4vdic7w2p9cdka30wd1hmzkoi5zcvcrdv5cr1ka0b2ohdgbc67gosa4newisourmey99l1jvbm2exft86w0xsfq5napy56a89nkdbz8g9qnegqwoqtwnq9805gixolaszskajojnv6al771hnjr1ko5zlqqgn42vfxvpbgpkvirw36bre57h0ria6th65vdbjfwsbf4z4j2
b2mktqdhn65mi2y9teamr1bm2vcmhf7czj6na8w0astupqn0njo8b2fjjcj6tdt0l0qysongyr28brl5r83oimfra5th0m860ng6i4cnb8v5d5blhfj5zxdds13zie1k410rizxamtp5l1eypn8kgyik5a7jdbstjjqqrqg3rga7g5wkuxj55pgwj1ufyx39jerkyecpiwgquya0psitktfg2ejkznnmsrz4x2nko3cfqgklsaagt4o3js10laiila4vmq7lrw49i86m7f8sm90ki9f3p55pk2cio88ovx8ppc0728y2gertoiuulf4rtzqflvrgt7qk9um1793tfvgiqy3nf753846ssr4l12ejargyiygpctoqgss3o44a2vlsj0he8y277fzuw0hoto0o3zttezfn3bt0j0jtl07cb9nmp6opt7lsuxtoqqtazggepqbxpoedd5atxvd3vbf7wyfmzk9oysa411gvfnqhrdqn1npfd6xq7is9di8vovwigbaypkv2stut2ddkhd3ggl6pe8y8rd72y5c76qduxvd4n7kgvkdi24i9ydko2r0nf8a1tuqwx7g9cbdk0z3ssfvo0aqj2cu00yfvlf6t89uw7q9r2l2qgbr5n93vmbqw0eogpsqz3vgc3ahx9n4rkfcgx95d3eny5r49bq0nq63y59rzuewb5tj9vouzjikwua1vwg6toos2cw43etitzdg25af4gv9qeh039ev4vm44j2chc04ad7jproi867bdl4cypmz531pgpo6yuiqfy9qx97psdz4fr6cdbnvxfrzvxz1sjz965g5n51bvpmc2vphmcewn00d60okydw3oyvjfmkg1yrpqfk7di763dmipiqwblis9 00:27:26.777 16:43:03 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:27:26.777 16:43:03 -- dd/basic_rw.sh@59 -- # gen_conf 00:27:26.777 16:43:03 -- dd/common.sh@31 -- # xtrace_disable 00:27:26.777 16:43:03 -- common/autotest_common.sh@10 -- # set +x 00:27:26.777 [2024-07-11 16:43:03.385677] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:26.777 [2024-07-11 16:43:03.385823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137779 ] 00:27:26.777 { 00:27:26.777 "subsystems": [ 00:27:26.777 { 00:27:26.777 "subsystem": "bdev", 00:27:26.777 "config": [ 00:27:26.777 { 00:27:26.777 "params": { 00:27:26.777 "trtype": "pcie", 00:27:26.777 "traddr": "0000:00:06.0", 00:27:26.777 "name": "Nvme0" 00:27:26.777 }, 00:27:26.777 "method": "bdev_nvme_attach_controller" 00:27:26.777 }, 00:27:26.777 { 00:27:26.777 "method": "bdev_wait_for_examine" 00:27:26.777 } 00:27:26.777 ] 00:27:26.777 } 00:27:26.777 ] 00:27:26.777 } 00:27:26.777 [2024-07-11 16:43:03.538222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.035 [2024-07-11 16:43:03.706237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.227  Copying: 4096/4096 [B] (average 4000 kBps) 00:27:28.227 00:27:28.227 16:43:04 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:27:28.227 16:43:04 -- dd/basic_rw.sh@65 -- # gen_conf 00:27:28.227 16:43:04 -- dd/common.sh@31 -- # xtrace_disable 00:27:28.227 16:43:04 -- common/autotest_common.sh@10 -- # set +x 00:27:28.227 { 00:27:28.227 "subsystems": [ 00:27:28.227 { 00:27:28.227 "subsystem": "bdev", 00:27:28.227 "config": [ 00:27:28.227 { 00:27:28.227 "params": { 00:27:28.227 "trtype": "pcie", 00:27:28.227 "traddr": "0000:00:06.0", 00:27:28.227 "name": "Nvme0" 00:27:28.227 }, 00:27:28.227 "method": "bdev_nvme_attach_controller" 00:27:28.227 }, 00:27:28.227 { 00:27:28.227 "method": "bdev_wait_for_examine" 00:27:28.227 } 00:27:28.227 ] 00:27:28.227 } 00:27:28.227 ] 00:27:28.227 } 00:27:28.227 [2024-07-11 16:43:04.952827] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
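dd_rw_offset writes the generated 4096-byte payload one block into the bdev (--seek=1); the run that follows reads from the same offset (--skip=1 --count=1) and compares the bytes in the shell with read -rn4096. A sketch of that round trip, again assuming gen_conf and $SPDK_DD from the first sketch; the newline-free base64 payload is a stand-in for gen_bytes:

# Offset round trip: write at block 1, read back from block 1, compare.
SPDK_DD=${SPDK_DD:-./build/bin/spdk_dd}
data=$(head -c 4096 /dev/urandom | base64 -w0 | head -c 4096)  # gen_bytes stand-in
printf %s "$data" > dd.dump0
"$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)
"$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)
read -rn4096 data_check < dd.dump1          # same in-shell read as the test
[[ "$data" == "$data_check" ]]              # non-zero exit fails the test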
00:27:28.227 [2024-07-11 16:43:04.953094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137807 ] 00:27:28.486 [2024-07-11 16:43:05.116272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.486 [2024-07-11 16:43:05.277132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.990  Copying: 4096/4096 [B] (average 4000 kBps) 00:27:29.990 00:27:29.990 16:43:06 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:27:29.991 16:43:06 -- dd/basic_rw.sh@72 -- # [[ hjly9pfu7unp5wj5f209qyae233rrc3lwevwxuqsod8j6zo1o1972i0bajt10y3vihbpg3n3v00jj9hdq64ed2j6uym1higiephpsgd8fsigykgcup3r6a9sua1va89fju48i3zzz8o72bm3se4csdotuvs95ba1xxxdt1ftk4grovhq8bg21si9wo44vxhejgd4t7679vzao0hp1m5ic3xp0bt42zfs5l9mtomidxmih9g9s17ih90dulruwjgxgxyfd6jyu8wz1h25isu3tfpsadlfp4daq6jffpspeultufe17dwevq9a0u4zhgk7519htq24ra8exms8dka4dc86wsh9126els4o94mfymgpdudifpnrqhbs09ftt823gjet88oh0h0o9gsyqsm7edojc1gj4nm5mograrooxhvzgsyfk6xvgygv4olqn46o6unq9p2wibz7yxfv8v2y0btjgt80u1fstn5nf62g23em8n71wzdux7zwsqrbgr2y8irh7gyhz61x5ckmag8mx8xk44sg8mst44rfs2oxyzyrha662anciyk7bjrus2gpnnca3rg9v1zx8d3m6oy9ibgy1b8etr89ltqjof46vxdaujcta7oybbae4f1vacipxyoi8ybnn1mbdiih76j9hzbe8njqpri8kh538uuiy3m43ci7af1jnboc9w3bhgih73yfyfxwfjlmhq63hs8brdv602u1up8zswaos1okktaqqv8syue4pridv6dm4xa8vdd2bsxvqhigkc6ptp1g5jtsfbs9pwigqiqhlz8u08esa7pf56b5d9zqdzia4ar7as44xe7zh2478qc9d6ttm05xh0qez48xl95sgrep7dr81vofbqpbiclks6egx06ps9107ayl249gw1vi8mtq35evse2i63tx3i4jmnx6fhlhltnrvlhtpfk4hof1tefvfplhrzv04w1u27qlgrijm72pl26ct6zdgp3aydv138kcvuvteptx369hiq9tiba94m2i6vzho2pa08srwvmhpfa87hxmf0nopwk0i2nwk8orsmwlnv121uexfj2yqo7tyfkybck0y7ytrd6laq7d6sof0s7f82hoolef8llm6vhbq3cv53n7cis8fotivwxpmjlg4g90css6zlw7jf4qmwavh2qezks60z8bsn96ycu5p7vbhgna88ogfqwjeoxdmch5hiwwvleac5dl2ldlpyzcohmy31wg7wsh8ygd72o415go3iujlllhbcuyr92yo2xn2jfh1yop0ze9if4j7uys8bv6irqq9wo72xh74npxu4gi18yrlkzhkw4237gus1zjw12ufvhu6ikcndm3fikd2ntpb3ep9xki034uucce0p7pazxx67b90r37g8prhs1xzzq0neu5tedaa75jbebhwrur51ytm33pupd7hrnxd0nbkmgs5r97xm24ptw0oa4qory44e1zvumuk43e0w0ni6236z10c1tgldo1xmi54ojiyqirjumw2hstm4zn3j9ek5g8l0bkt9k356wo15z24j2nim5970f9k10jfz0lll4tbwese6hf2i03bbm1mtm6mb8pgmyffc4wj2h7bl6xi5f9zjehvbzj09tswxd9qqiamx5yy3v90phr2erub8jq5vkkdm5q8v816fuu63fwplkfazwm4il6uiuh2ohd9m39ac682bb8un22o53w2loqginix0pqn5r0xqlm17r7hniwuhwsnaod0vebfsibycn8jbxp8882tt3k5xmk4ukkggksoymnzgf13aty7d48zvkv2gp2xptr5o2tkgx1s7rypn9wertvwixv1l09twkspm1lrje9obc1m5ytqo686gqp0rc4ch8l3ol1gnbm1rqky5jp1jp3adr9dw67w3bgw0qqq79zpyziltvysftsndry6d3pfxom0sdhqnkca3rskleqa6ms3ftld8d6i6hanvz5pmoyivadn3p9hcmx3uigxr7u3ftjxchohjdjs5jwrdnkg402j1rktwd991nu1j56wbzcgkbryxcv9f2e17cobsvb6ir28rjq7aweu9jkdslk8rw3opidxjb1yp54s4rgavymyr2nle0haybfjofjr2ds4jm42eyhq02g0bmwjdne0t9z888psdav9jtuzl6sspody1dem5f77ohfdsj5otdt61z3yu5ctwaoytp5shehp9g3n4kqu3d14g2a2b79zb6e0hbguy2hcp3u4d0wmn1hejeb8xn1k86t4nrx5ia2ky8ctntiqmwpk703qwe95gw2kr0lc4nk9c960yzle37pncwprpot62msgsa2rdfef2d3rm96z4akoewf58r0il7eaut2zyuiidf49uwcvu27g0vrzf8okq6npks5os3lsbex31ou18qay0tb0ebkdk2y2vfeyeozzrbtzntss6hpt7z5nxo3l9aodskm2lcm4omey0cgf1p5lad0jqqzqfxb7yynsfxecyr1b3ieif77h90en9bcmrs168q5it20b02gyyenv1nmw5b7ogrwc66dcgp4mwa908y7l9umhuuc464a4sdx0oufhaso7bxfiegtskxkfd8sgmgvpn7h56kfb4xmoaldd8xwzv7xxk9mpc0rxthid5pf8wjpoawhl7uczvq0egenok5yzkx8jtb44u2npuq6nsz3ny8m5pjcsy70i46kf1h8ifsuil7ui2jupsa715rzdn1ngxohr9u6as54j356hw12imshubv3wkvdhkvajwfrhqv315aaq9ku5n91nvsn0zv0oipltoc7mx88g8rkz5prd1ywwc6n0ab508ge8smuq0fc
uxl3gmnhqqhi98cra067nvr0urymhbjkx3xpw3wla2efrl1tutci0secvu0fkyr2s5z4q8ghmezhwoq6lomzo0sujoh1faav3md76q7874a8mxxxmrx5hgo71weevkv9wzruj7v2ulhvt4vdic7w2p9cdka30wd1hmzkoi5zcvcrdv5cr1ka0b2ohdgbc67gosa4newisourmey99l1jvbm2exft86w0xsfq5napy56a89nkdbz8g9qnegqwoqtwnq9805gixolaszskajojnv6al771hnjr1ko5zlqqgn42vfxvpbgpkvirw36bre57h0ria6th65vdbjfwsbf4z4j2b2mktqdhn65mi2y9teamr1bm2vcmhf7czj6na8w0astupqn0njo8b2fjjcj6tdt0l0qysongyr28brl5r83oimfra5th0m860ng6i4cnb8v5d5blhfj5zxdds13zie1k410rizxamtp5l1eypn8kgyik5a7jdbstjjqqrqg3rga7g5wkuxj55pgwj1ufyx39jerkyecpiwgquya0psitktfg2ejkznnmsrz4x2nko3cfqgklsaagt4o3js10laiila4vmq7lrw49i86m7f8sm90ki9f3p55pk2cio88ovx8ppc0728y2gertoiuulf4rtzqflvrgt7qk9um1793tfvgiqy3nf753846ssr4l12ejargyiygpctoqgss3o44a2vlsj0he8y277fzuw0hoto0o3zttezfn3bt0j0jtl07cb9nmp6opt7lsuxtoqqtazggepqbxpoedd5atxvd3vbf7wyfmzk9oysa411gvfnqhrdqn1npfd6xq7is9di8vovwigbaypkv2stut2ddkhd3ggl6pe8y8rd72y5c76qduxvd4n7kgvkdi24i9ydko2r0nf8a1tuqwx7g9cbdk0z3ssfvo0aqj2cu00yfvlf6t89uw7q9r2l2qgbr5n93vmbqw0eogpsqz3vgc3ahx9n4rkfcgx95d3eny5r49bq0nq63y59rzuewb5tj9vouzjikwua1vwg6toos2cw43etitzdg25af4gv9qeh039ev4vm44j2chc04ad7jproi867bdl4cypmz531pgpo6yuiqfy9qx97psdz4fr6cdbnvxfrzvxz1sjz965g5n51bvpmc2vphmcewn00d60okydw3oyvjfmkg1yrpqfk7di763dmipiqwblis9 == \h\j\l\y\9\p\f\u\7\u\n\p\5\w\j\5\f\2\0\9\q\y\a\e\2\3\3\r\r\c\3\l\w\e\v\w\x\u\q\s\o\d\8\j\6\z\o\1\o\1\9\7\2\i\0\b\a\j\t\1\0\y\3\v\i\h\b\p\g\3\n\3\v\0\0\j\j\9\h\d\q\6\4\e\d\2\j\6\u\y\m\1\h\i\g\i\e\p\h\p\s\g\d\8\f\s\i\g\y\k\g\c\u\p\3\r\6\a\9\s\u\a\1\v\a\8\9\f\j\u\4\8\i\3\z\z\z\8\o\7\2\b\m\3\s\e\4\c\s\d\o\t\u\v\s\9\5\b\a\1\x\x\x\d\t\1\f\t\k\4\g\r\o\v\h\q\8\b\g\2\1\s\i\9\w\o\4\4\v\x\h\e\j\g\d\4\t\7\6\7\9\v\z\a\o\0\h\p\1\m\5\i\c\3\x\p\0\b\t\4\2\z\f\s\5\l\9\m\t\o\m\i\d\x\m\i\h\9\g\9\s\1\7\i\h\9\0\d\u\l\r\u\w\j\g\x\g\x\y\f\d\6\j\y\u\8\w\z\1\h\2\5\i\s\u\3\t\f\p\s\a\d\l\f\p\4\d\a\q\6\j\f\f\p\s\p\e\u\l\t\u\f\e\1\7\d\w\e\v\q\9\a\0\u\4\z\h\g\k\7\5\1\9\h\t\q\2\4\r\a\8\e\x\m\s\8\d\k\a\4\d\c\8\6\w\s\h\9\1\2\6\e\l\s\4\o\9\4\m\f\y\m\g\p\d\u\d\i\f\p\n\r\q\h\b\s\0\9\f\t\t\8\2\3\g\j\e\t\8\8\o\h\0\h\0\o\9\g\s\y\q\s\m\7\e\d\o\j\c\1\g\j\4\n\m\5\m\o\g\r\a\r\o\o\x\h\v\z\g\s\y\f\k\6\x\v\g\y\g\v\4\o\l\q\n\4\6\o\6\u\n\q\9\p\2\w\i\b\z\7\y\x\f\v\8\v\2\y\0\b\t\j\g\t\8\0\u\1\f\s\t\n\5\n\f\6\2\g\2\3\e\m\8\n\7\1\w\z\d\u\x\7\z\w\s\q\r\b\g\r\2\y\8\i\r\h\7\g\y\h\z\6\1\x\5\c\k\m\a\g\8\m\x\8\x\k\4\4\s\g\8\m\s\t\4\4\r\f\s\2\o\x\y\z\y\r\h\a\6\6\2\a\n\c\i\y\k\7\b\j\r\u\s\2\g\p\n\n\c\a\3\r\g\9\v\1\z\x\8\d\3\m\6\o\y\9\i\b\g\y\1\b\8\e\t\r\8\9\l\t\q\j\o\f\4\6\v\x\d\a\u\j\c\t\a\7\o\y\b\b\a\e\4\f\1\v\a\c\i\p\x\y\o\i\8\y\b\n\n\1\m\b\d\i\i\h\7\6\j\9\h\z\b\e\8\n\j\q\p\r\i\8\k\h\5\3\8\u\u\i\y\3\m\4\3\c\i\7\a\f\1\j\n\b\o\c\9\w\3\b\h\g\i\h\7\3\y\f\y\f\x\w\f\j\l\m\h\q\6\3\h\s\8\b\r\d\v\6\0\2\u\1\u\p\8\z\s\w\a\o\s\1\o\k\k\t\a\q\q\v\8\s\y\u\e\4\p\r\i\d\v\6\d\m\4\x\a\8\v\d\d\2\b\s\x\v\q\h\i\g\k\c\6\p\t\p\1\g\5\j\t\s\f\b\s\9\p\w\i\g\q\i\q\h\l\z\8\u\0\8\e\s\a\7\p\f\5\6\b\5\d\9\z\q\d\z\i\a\4\a\r\7\a\s\4\4\x\e\7\z\h\2\4\7\8\q\c\9\d\6\t\t\m\0\5\x\h\0\q\e\z\4\8\x\l\9\5\s\g\r\e\p\7\d\r\8\1\v\o\f\b\q\p\b\i\c\l\k\s\6\e\g\x\0\6\p\s\9\1\0\7\a\y\l\2\4\9\g\w\1\v\i\8\m\t\q\3\5\e\v\s\e\2\i\6\3\t\x\3\i\4\j\m\n\x\6\f\h\l\h\l\t\n\r\v\l\h\t\p\f\k\4\h\o\f\1\t\e\f\v\f\p\l\h\r\z\v\0\4\w\1\u\2\7\q\l\g\r\i\j\m\7\2\p\l\2\6\c\t\6\z\d\g\p\3\a\y\d\v\1\3\8\k\c\v\u\v\t\e\p\t\x\3\6\9\h\i\q\9\t\i\b\a\9\4\m\2\i\6\v\z\h\o\2\p\a\0\8\s\r\w\v\m\h\p\f\a\8\7\h\x\m\f\0\n\o\p\w\k\0\i\2\n\w\k\8\o\r\s\m\w\l\n\v\1\2\1\u\e\x\f\j\2\y\q\o\7\t\y\f\k\y\b\c\k\0\y\7\y\t\r\d\6\l\a\q\7\d\6\s\o\f\0\s\7\f\8\2\h\o\o\l\e\f\8\l\l\m\6\v\h\b\q\3\c\v\5\3\n\7\c\i\s\8\f\o\t\i\v\w\x\p\m\j\l\g\4\g\9\
0\c\s\s\6\z\l\w\7\j\f\4\q\m\w\a\v\h\2\q\e\z\k\s\6\0\z\8\b\s\n\9\6\y\c\u\5\p\7\v\b\h\g\n\a\8\8\o\g\f\q\w\j\e\o\x\d\m\c\h\5\h\i\w\w\v\l\e\a\c\5\d\l\2\l\d\l\p\y\z\c\o\h\m\y\3\1\w\g\7\w\s\h\8\y\g\d\7\2\o\4\1\5\g\o\3\i\u\j\l\l\l\h\b\c\u\y\r\9\2\y\o\2\x\n\2\j\f\h\1\y\o\p\0\z\e\9\i\f\4\j\7\u\y\s\8\b\v\6\i\r\q\q\9\w\o\7\2\x\h\7\4\n\p\x\u\4\g\i\1\8\y\r\l\k\z\h\k\w\4\2\3\7\g\u\s\1\z\j\w\1\2\u\f\v\h\u\6\i\k\c\n\d\m\3\f\i\k\d\2\n\t\p\b\3\e\p\9\x\k\i\0\3\4\u\u\c\c\e\0\p\7\p\a\z\x\x\6\7\b\9\0\r\3\7\g\8\p\r\h\s\1\x\z\z\q\0\n\e\u\5\t\e\d\a\a\7\5\j\b\e\b\h\w\r\u\r\5\1\y\t\m\3\3\p\u\p\d\7\h\r\n\x\d\0\n\b\k\m\g\s\5\r\9\7\x\m\2\4\p\t\w\0\o\a\4\q\o\r\y\4\4\e\1\z\v\u\m\u\k\4\3\e\0\w\0\n\i\6\2\3\6\z\1\0\c\1\t\g\l\d\o\1\x\m\i\5\4\o\j\i\y\q\i\r\j\u\m\w\2\h\s\t\m\4\z\n\3\j\9\e\k\5\g\8\l\0\b\k\t\9\k\3\5\6\w\o\1\5\z\2\4\j\2\n\i\m\5\9\7\0\f\9\k\1\0\j\f\z\0\l\l\l\4\t\b\w\e\s\e\6\h\f\2\i\0\3\b\b\m\1\m\t\m\6\m\b\8\p\g\m\y\f\f\c\4\w\j\2\h\7\b\l\6\x\i\5\f\9\z\j\e\h\v\b\z\j\0\9\t\s\w\x\d\9\q\q\i\a\m\x\5\y\y\3\v\9\0\p\h\r\2\e\r\u\b\8\j\q\5\v\k\k\d\m\5\q\8\v\8\1\6\f\u\u\6\3\f\w\p\l\k\f\a\z\w\m\4\i\l\6\u\i\u\h\2\o\h\d\9\m\3\9\a\c\6\8\2\b\b\8\u\n\2\2\o\5\3\w\2\l\o\q\g\i\n\i\x\0\p\q\n\5\r\0\x\q\l\m\1\7\r\7\h\n\i\w\u\h\w\s\n\a\o\d\0\v\e\b\f\s\i\b\y\c\n\8\j\b\x\p\8\8\8\2\t\t\3\k\5\x\m\k\4\u\k\k\g\g\k\s\o\y\m\n\z\g\f\1\3\a\t\y\7\d\4\8\z\v\k\v\2\g\p\2\x\p\t\r\5\o\2\t\k\g\x\1\s\7\r\y\p\n\9\w\e\r\t\v\w\i\x\v\1\l\0\9\t\w\k\s\p\m\1\l\r\j\e\9\o\b\c\1\m\5\y\t\q\o\6\8\6\g\q\p\0\r\c\4\c\h\8\l\3\o\l\1\g\n\b\m\1\r\q\k\y\5\j\p\1\j\p\3\a\d\r\9\d\w\6\7\w\3\b\g\w\0\q\q\q\7\9\z\p\y\z\i\l\t\v\y\s\f\t\s\n\d\r\y\6\d\3\p\f\x\o\m\0\s\d\h\q\n\k\c\a\3\r\s\k\l\e\q\a\6\m\s\3\f\t\l\d\8\d\6\i\6\h\a\n\v\z\5\p\m\o\y\i\v\a\d\n\3\p\9\h\c\m\x\3\u\i\g\x\r\7\u\3\f\t\j\x\c\h\o\h\j\d\j\s\5\j\w\r\d\n\k\g\4\0\2\j\1\r\k\t\w\d\9\9\1\n\u\1\j\5\6\w\b\z\c\g\k\b\r\y\x\c\v\9\f\2\e\1\7\c\o\b\s\v\b\6\i\r\2\8\r\j\q\7\a\w\e\u\9\j\k\d\s\l\k\8\r\w\3\o\p\i\d\x\j\b\1\y\p\5\4\s\4\r\g\a\v\y\m\y\r\2\n\l\e\0\h\a\y\b\f\j\o\f\j\r\2\d\s\4\j\m\4\2\e\y\h\q\0\2\g\0\b\m\w\j\d\n\e\0\t\9\z\8\8\8\p\s\d\a\v\9\j\t\u\z\l\6\s\s\p\o\d\y\1\d\e\m\5\f\7\7\o\h\f\d\s\j\5\o\t\d\t\6\1\z\3\y\u\5\c\t\w\a\o\y\t\p\5\s\h\e\h\p\9\g\3\n\4\k\q\u\3\d\1\4\g\2\a\2\b\7\9\z\b\6\e\0\h\b\g\u\y\2\h\c\p\3\u\4\d\0\w\m\n\1\h\e\j\e\b\8\x\n\1\k\8\6\t\4\n\r\x\5\i\a\2\k\y\8\c\t\n\t\i\q\m\w\p\k\7\0\3\q\w\e\9\5\g\w\2\k\r\0\l\c\4\n\k\9\c\9\6\0\y\z\l\e\3\7\p\n\c\w\p\r\p\o\t\6\2\m\s\g\s\a\2\r\d\f\e\f\2\d\3\r\m\9\6\z\4\a\k\o\e\w\f\5\8\r\0\i\l\7\e\a\u\t\2\z\y\u\i\i\d\f\4\9\u\w\c\v\u\2\7\g\0\v\r\z\f\8\o\k\q\6\n\p\k\s\5\o\s\3\l\s\b\e\x\3\1\o\u\1\8\q\a\y\0\t\b\0\e\b\k\d\k\2\y\2\v\f\e\y\e\o\z\z\r\b\t\z\n\t\s\s\6\h\p\t\7\z\5\n\x\o\3\l\9\a\o\d\s\k\m\2\l\c\m\4\o\m\e\y\0\c\g\f\1\p\5\l\a\d\0\j\q\q\z\q\f\x\b\7\y\y\n\s\f\x\e\c\y\r\1\b\3\i\e\i\f\7\7\h\9\0\e\n\9\b\c\m\r\s\1\6\8\q\5\i\t\2\0\b\0\2\g\y\y\e\n\v\1\n\m\w\5\b\7\o\g\r\w\c\6\6\d\c\g\p\4\m\w\a\9\0\8\y\7\l\9\u\m\h\u\u\c\4\6\4\a\4\s\d\x\0\o\u\f\h\a\s\o\7\b\x\f\i\e\g\t\s\k\x\k\f\d\8\s\g\m\g\v\p\n\7\h\5\6\k\f\b\4\x\m\o\a\l\d\d\8\x\w\z\v\7\x\x\k\9\m\p\c\0\r\x\t\h\i\d\5\p\f\8\w\j\p\o\a\w\h\l\7\u\c\z\v\q\0\e\g\e\n\o\k\5\y\z\k\x\8\j\t\b\4\4\u\2\n\p\u\q\6\n\s\z\3\n\y\8\m\5\p\j\c\s\y\7\0\i\4\6\k\f\1\h\8\i\f\s\u\i\l\7\u\i\2\j\u\p\s\a\7\1\5\r\z\d\n\1\n\g\x\o\h\r\9\u\6\a\s\5\4\j\3\5\6\h\w\1\2\i\m\s\h\u\b\v\3\w\k\v\d\h\k\v\a\j\w\f\r\h\q\v\3\1\5\a\a\q\9\k\u\5\n\9\1\n\v\s\n\0\z\v\0\o\i\p\l\t\o\c\7\m\x\8\8\g\8\r\k\z\5\p\r\d\1\y\w\w\c\6\n\0\a\b\5\0\8\g\e\8\s\m\u\q\0\f\c\u\x\l\3\g\m\n\h\q\q\h\i\9\8\c\r\a\0\6\7\n\v\r\0\u\r\y\m\h\b\j\k\x\3\x\p\w\3\w\l\a\2\e\f\r\l\1\t\u\t\c\i\0\s\e\c\v\u\0\f\k\y\r\2\s\5\z\4\q\8\g\h\m
\e\z\h\w\o\q\6\l\o\m\z\o\0\s\u\j\o\h\1\f\a\a\v\3\m\d\7\6\q\7\8\7\4\a\8\m\x\x\x\m\r\x\5\h\g\o\7\1\w\e\e\v\k\v\9\w\z\r\u\j\7\v\2\u\l\h\v\t\4\v\d\i\c\7\w\2\p\9\c\d\k\a\3\0\w\d\1\h\m\z\k\o\i\5\z\c\v\c\r\d\v\5\c\r\1\k\a\0\b\2\o\h\d\g\b\c\6\7\g\o\s\a\4\n\e\w\i\s\o\u\r\m\e\y\9\9\l\1\j\v\b\m\2\e\x\f\t\8\6\w\0\x\s\f\q\5\n\a\p\y\5\6\a\8\9\n\k\d\b\z\8\g\9\q\n\e\g\q\w\o\q\t\w\n\q\9\8\0\5\g\i\x\o\l\a\s\z\s\k\a\j\o\j\n\v\6\a\l\7\7\1\h\n\j\r\1\k\o\5\z\l\q\q\g\n\4\2\v\f\x\v\p\b\g\p\k\v\i\r\w\3\6\b\r\e\5\7\h\0\r\i\a\6\t\h\6\5\v\d\b\j\f\w\s\b\f\4\z\4\j\2\b\2\m\k\t\q\d\h\n\6\5\m\i\2\y\9\t\e\a\m\r\1\b\m\2\v\c\m\h\f\7\c\z\j\6\n\a\8\w\0\a\s\t\u\p\q\n\0\n\j\o\8\b\2\f\j\j\c\j\6\t\d\t\0\l\0\q\y\s\o\n\g\y\r\2\8\b\r\l\5\r\8\3\o\i\m\f\r\a\5\t\h\0\m\8\6\0\n\g\6\i\4\c\n\b\8\v\5\d\5\b\l\h\f\j\5\z\x\d\d\s\1\3\z\i\e\1\k\4\1\0\r\i\z\x\a\m\t\p\5\l\1\e\y\p\n\8\k\g\y\i\k\5\a\7\j\d\b\s\t\j\j\q\q\r\q\g\3\r\g\a\7\g\5\w\k\u\x\j\5\5\p\g\w\j\1\u\f\y\x\3\9\j\e\r\k\y\e\c\p\i\w\g\q\u\y\a\0\p\s\i\t\k\t\f\g\2\e\j\k\z\n\n\m\s\r\z\4\x\2\n\k\o\3\c\f\q\g\k\l\s\a\a\g\t\4\o\3\j\s\1\0\l\a\i\i\l\a\4\v\m\q\7\l\r\w\4\9\i\8\6\m\7\f\8\s\m\9\0\k\i\9\f\3\p\5\5\p\k\2\c\i\o\8\8\o\v\x\8\p\p\c\0\7\2\8\y\2\g\e\r\t\o\i\u\u\l\f\4\r\t\z\q\f\l\v\r\g\t\7\q\k\9\u\m\1\7\9\3\t\f\v\g\i\q\y\3\n\f\7\5\3\8\4\6\s\s\r\4\l\1\2\e\j\a\r\g\y\i\y\g\p\c\t\o\q\g\s\s\3\o\4\4\a\2\v\l\s\j\0\h\e\8\y\2\7\7\f\z\u\w\0\h\o\t\o\0\o\3\z\t\t\e\z\f\n\3\b\t\0\j\0\j\t\l\0\7\c\b\9\n\m\p\6\o\p\t\7\l\s\u\x\t\o\q\q\t\a\z\g\g\e\p\q\b\x\p\o\e\d\d\5\a\t\x\v\d\3\v\b\f\7\w\y\f\m\z\k\9\o\y\s\a\4\1\1\g\v\f\n\q\h\r\d\q\n\1\n\p\f\d\6\x\q\7\i\s\9\d\i\8\v\o\v\w\i\g\b\a\y\p\k\v\2\s\t\u\t\2\d\d\k\h\d\3\g\g\l\6\p\e\8\y\8\r\d\7\2\y\5\c\7\6\q\d\u\x\v\d\4\n\7\k\g\v\k\d\i\2\4\i\9\y\d\k\o\2\r\0\n\f\8\a\1\t\u\q\w\x\7\g\9\c\b\d\k\0\z\3\s\s\f\v\o\0\a\q\j\2\c\u\0\0\y\f\v\l\f\6\t\8\9\u\w\7\q\9\r\2\l\2\q\g\b\r\5\n\9\3\v\m\b\q\w\0\e\o\g\p\s\q\z\3\v\g\c\3\a\h\x\9\n\4\r\k\f\c\g\x\9\5\d\3\e\n\y\5\r\4\9\b\q\0\n\q\6\3\y\5\9\r\z\u\e\w\b\5\t\j\9\v\o\u\z\j\i\k\w\u\a\1\v\w\g\6\t\o\o\s\2\c\w\4\3\e\t\i\t\z\d\g\2\5\a\f\4\g\v\9\q\e\h\0\3\9\e\v\4\v\m\4\4\j\2\c\h\c\0\4\a\d\7\j\p\r\o\i\8\6\7\b\d\l\4\c\y\p\m\z\5\3\1\p\g\p\o\6\y\u\i\q\f\y\9\q\x\9\7\p\s\d\z\4\f\r\6\c\d\b\n\v\x\f\r\z\v\x\z\1\s\j\z\9\6\5\g\5\n\5\1\b\v\p\m\c\2\v\p\h\m\c\e\w\n\0\0\d\6\0\o\k\y\d\w\3\o\y\v\j\f\m\k\g\1\y\r\p\q\f\k\7\d\i\7\6\3\d\m\i\p\i\q\w\b\l\i\s\9 ]] 00:27:29.991 00:27:29.991 real 0m3.251s 00:27:29.991 user 0m2.670s 00:27:29.991 sys 0m0.440s 00:27:29.991 16:43:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.991 16:43:06 -- common/autotest_common.sh@10 -- # set +x 00:27:29.991 ************************************ 00:27:29.991 END TEST dd_rw_offset 00:27:29.991 ************************************ 00:27:29.991 16:43:06 -- dd/basic_rw.sh@1 -- # cleanup 00:27:29.991 16:43:06 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:27:29.991 16:43:06 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:29.991 16:43:06 -- dd/common.sh@11 -- # local nvme_ref= 00:27:29.991 16:43:06 -- dd/common.sh@12 -- # local size=0xffff 00:27:29.991 16:43:06 -- dd/common.sh@14 -- # local bs=1048576 00:27:29.991 16:43:06 -- dd/common.sh@15 -- # local count=1 00:27:29.991 16:43:06 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:29.991 16:43:06 -- dd/common.sh@18 -- # gen_conf 00:27:29.991 16:43:06 -- dd/common.sh@31 -- # xtrace_disable 00:27:29.991 16:43:06 -- common/autotest_common.sh@10 -- # set +x 00:27:29.991 [2024-07-11 16:43:06.635232] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 23.11.0 initialization... 00:27:29.991 [2024-07-11 16:43:06.635429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137850 ] 00:27:29.991 { 00:27:29.991 "subsystems": [ 00:27:29.991 { 00:27:29.991 "subsystem": "bdev", 00:27:29.991 "config": [ 00:27:29.991 { 00:27:29.991 "params": { 00:27:29.991 "trtype": "pcie", 00:27:29.991 "traddr": "0000:00:06.0", 00:27:29.991 "name": "Nvme0" 00:27:29.991 }, 00:27:29.991 "method": "bdev_nvme_attach_controller" 00:27:29.991 }, 00:27:29.991 { 00:27:29.991 "method": "bdev_wait_for_examine" 00:27:29.991 } 00:27:29.991 ] 00:27:29.991 } 00:27:29.991 ] 00:27:29.991 } 00:27:30.250 [2024-07-11 16:43:06.800402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.250 [2024-07-11 16:43:06.954285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.444  Copying: 1024/1024 [kB] (average 1000 MBps) 00:27:31.444 00:27:31.444 16:43:08 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:31.444 ************************************ 00:27:31.444 END TEST spdk_dd_basic_rw 00:27:31.444 ************************************ 00:27:31.444 00:27:31.444 real 0m38.998s 00:27:31.444 user 0m32.115s 00:27:31.444 sys 0m5.266s 00:27:31.444 16:43:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.444 16:43:08 -- common/autotest_common.sh@10 -- # set +x 00:27:31.444 16:43:08 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:31.444 16:43:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:31.444 16:43:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:31.444 16:43:08 -- common/autotest_common.sh@10 -- # set +x 00:27:31.444 ************************************ 00:27:31.444 START TEST spdk_dd_posix 00:27:31.444 ************************************ 00:27:31.444 16:43:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:31.444 * Looking for test storage... 
00:27:31.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:31.445 16:43:08 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:31.445 16:43:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.445 16:43:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.445 16:43:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.445 16:43:08 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:31.445 16:43:08 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:31.445 16:43:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:31.445 16:43:08 -- paths/export.sh@5 -- # export PATH 00:27:31.445 16:43:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:31.445 16:43:08 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:27:31.445 16:43:08 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:27:31.445 16:43:08 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:27:31.445 16:43:08 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:27:31.445 16:43:08 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:31.445 16:43:08 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:31.445 16:43:08 -- dd/posix.sh@130 -- # tests 00:27:31.445 16:43:08 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:27:31.445 * First test run, using AIO 00:27:31.445 16:43:08 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:27:31.445 16:43:08 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:31.445 16:43:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:31.445 16:43:08 -- common/autotest_common.sh@10 -- # set +x 00:27:31.703 ************************************ 00:27:31.703 START TEST dd_flag_append 00:27:31.703 ************************************ 00:27:31.703 16:43:08 -- common/autotest_common.sh@1104 -- # append 00:27:31.703 16:43:08 -- dd/posix.sh@16 -- # local dump0 00:27:31.703 16:43:08 -- dd/posix.sh@17 -- # local dump1 00:27:31.703 16:43:08 -- dd/posix.sh@19 -- # gen_bytes 32 00:27:31.703 16:43:08 -- dd/common.sh@98 -- # xtrace_disable 00:27:31.703 16:43:08 -- common/autotest_common.sh@10 -- # set +x 00:27:31.703 16:43:08 -- dd/posix.sh@19 -- # dump0=vhbwybk83k3xk3rneu54d1wk8smzaefn 00:27:31.703 16:43:08 -- dd/posix.sh@20 -- # gen_bytes 32 00:27:31.703 16:43:08 -- dd/common.sh@98 -- # xtrace_disable 00:27:31.703 16:43:08 -- common/autotest_common.sh@10 -- # set +x 00:27:31.703 16:43:08 -- dd/posix.sh@20 -- # dump1=hmmypkxtb8dapmtl6erwmbc6eb8h1v8v 00:27:31.703 16:43:08 -- dd/posix.sh@22 -- # printf %s vhbwybk83k3xk3rneu54d1wk8smzaefn 00:27:31.703 16:43:08 -- dd/posix.sh@23 -- # printf %s hmmypkxtb8dapmtl6erwmbc6eb8h1v8v 00:27:31.703 16:43:08 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:27:31.703 [2024-07-11 16:43:08.316135] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:31.703 [2024-07-11 16:43:08.317135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137919 ] 00:27:31.703 [2024-07-11 16:43:08.487834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.962 [2024-07-11 16:43:08.653310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.155  Copying: 32/32 [B] (average 31 kBps) 00:27:33.155 00:27:33.155 16:43:09 -- dd/posix.sh@27 -- # [[ hmmypkxtb8dapmtl6erwmbc6eb8h1v8vvhbwybk83k3xk3rneu54d1wk8smzaefn == \h\m\m\y\p\k\x\t\b\8\d\a\p\m\t\l\6\e\r\w\m\b\c\6\e\b\8\h\1\v\8\v\v\h\b\w\y\b\k\8\3\k\3\x\k\3\r\n\e\u\5\4\d\1\w\k\8\s\m\z\a\e\f\n ]] 00:27:33.155 00:27:33.155 real 0m1.579s 00:27:33.155 user 0m1.221s 00:27:33.155 sys 0m0.229s 00:27:33.155 16:43:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:33.155 16:43:09 -- common/autotest_common.sh@10 -- # set +x 00:27:33.155 ************************************ 00:27:33.155 END TEST dd_flag_append 00:27:33.155 ************************************ 00:27:33.155 16:43:09 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:27:33.155 16:43:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:33.155 16:43:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:33.155 16:43:09 -- common/autotest_common.sh@10 -- # set +x 00:27:33.155 ************************************ 00:27:33.155 START TEST dd_flag_directory 00:27:33.155 ************************************ 00:27:33.155 16:43:09 -- common/autotest_common.sh@1104 -- # directory 00:27:33.155 16:43:09 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:33.155 16:43:09 -- common/autotest_common.sh@640 -- # local es=0 
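The dd_flag_append case above is a plain file-to-file copy, so no bdev JSON is involved: two 32-character payloads are generated, the first file is copied onto the second with --oflag=append, and the destination must then be exactly its original bytes followed by the appended ones. Roughly, with illustrative payloads:

# Sketch of the append-flag check: writing with --oflag=append must leave
# the original destination bytes in place.
SPDK_DD=${SPDK_DD:-./build/bin/spdk_dd}
dump0=$(head -c 24 /dev/urandom | base64)   # 32 chars, stand-in for gen_bytes 32
dump1=$(head -c 24 /dev/urandom | base64)
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append
[[ $(<dd.dump1) == "${dump1}${dump0}" ]]    # destination keeps its bytes first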
00:27:33.156 16:43:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:33.156 16:43:09 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.156 16:43:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:33.156 16:43:09 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.156 16:43:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:33.156 16:43:09 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.156 16:43:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:33.156 16:43:09 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:33.156 16:43:09 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:33.156 16:43:09 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:33.156 [2024-07-11 16:43:09.944695] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:33.156 [2024-07-11 16:43:09.945075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137973 ] 00:27:33.413 [2024-07-11 16:43:10.109262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.672 [2024-07-11 16:43:10.263472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.930 [2024-07-11 16:43:10.521322] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:33.930 [2024-07-11 16:43:10.521392] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:33.930 [2024-07-11 16:43:10.521432] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:34.495 [2024-07-11 16:43:11.103274] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:34.753 16:43:11 -- common/autotest_common.sh@643 -- # es=236 00:27:34.753 16:43:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:34.753 16:43:11 -- common/autotest_common.sh@652 -- # es=108 00:27:34.753 16:43:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:34.753 16:43:11 -- common/autotest_common.sh@660 -- # es=1 00:27:34.753 16:43:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:34.753 16:43:11 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:34.753 16:43:11 -- common/autotest_common.sh@640 -- # local es=0 00:27:34.753 16:43:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:34.753 16:43:11 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:34.753 16:43:11 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:34.753 16:43:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:34.753 16:43:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:34.753 16:43:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:34.753 16:43:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:34.753 16:43:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:34.753 16:43:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:34.753 16:43:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:34.753 [2024-07-11 16:43:11.487403] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:34.753 [2024-07-11 16:43:11.487609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138019 ] 00:27:35.010 [2024-07-11 16:43:11.653904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.010 [2024-07-11 16:43:11.806874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.268 [2024-07-11 16:43:12.051313] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:35.268 [2024-07-11 16:43:12.051397] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:35.268 [2024-07-11 16:43:12.051440] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:35.834 [2024-07-11 16:43:12.627214] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:36.398 16:43:12 -- common/autotest_common.sh@643 -- # es=236 00:27:36.398 16:43:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:36.398 16:43:12 -- common/autotest_common.sh@652 -- # es=108 00:27:36.398 16:43:12 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:36.398 16:43:12 -- common/autotest_common.sh@660 -- # es=1 00:27:36.398 16:43:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:36.398 00:27:36.398 real 0m3.064s 00:27:36.398 user 0m2.462s 00:27:36.398 sys 0m0.401s 00:27:36.398 16:43:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.398 ************************************ 00:27:36.398 END TEST dd_flag_directory 00:27:36.398 ************************************ 00:27:36.398 16:43:12 -- common/autotest_common.sh@10 -- # set +x 00:27:36.398 16:43:12 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:27:36.398 16:43:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:36.398 16:43:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:36.398 16:43:12 -- common/autotest_common.sh@10 -- # set +x 00:27:36.398 ************************************ 00:27:36.398 START TEST dd_flag_nofollow 00:27:36.398 ************************************ 00:27:36.398 16:43:12 -- common/autotest_common.sh@1104 -- # nofollow 00:27:36.398 16:43:12 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:36.398 16:43:12 -- dd/posix.sh@37 -- # local 
test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:36.398 16:43:12 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:36.398 16:43:12 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:36.398 16:43:12 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:36.398 16:43:12 -- common/autotest_common.sh@640 -- # local es=0 00:27:36.398 16:43:12 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:36.398 16:43:12 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.398 16:43:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:36.398 16:43:12 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.398 16:43:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:36.398 16:43:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.398 16:43:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:36.398 16:43:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:36.398 16:43:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:36.398 16:43:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:36.398 [2024-07-11 16:43:13.061471] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:36.398 [2024-07-11 16:43:13.061806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138057 ] 00:27:36.656 [2024-07-11 16:43:13.225889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.656 [2024-07-11 16:43:13.384642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.914 [2024-07-11 16:43:13.636869] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:36.914 [2024-07-11 16:43:13.636969] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:36.914 [2024-07-11 16:43:13.637011] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:37.481 [2024-07-11 16:43:14.209656] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:37.741 16:43:14 -- common/autotest_common.sh@643 -- # es=216 00:27:37.741 16:43:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:37.741 16:43:14 -- common/autotest_common.sh@652 -- # es=88 00:27:37.741 16:43:14 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:37.741 16:43:14 -- common/autotest_common.sh@660 -- # es=1 00:27:37.741 16:43:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:37.741 16:43:14 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:37.741 16:43:14 -- common/autotest_common.sh@640 -- # local es=0 00:27:37.741 16:43:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:37.741 16:43:14 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:37.741 16:43:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:37.741 16:43:14 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:37.741 16:43:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:37.741 16:43:14 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:37.741 16:43:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:37.741 16:43:14 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:37.741 16:43:14 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:37.741 16:43:14 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:38.000 [2024-07-11 16:43:14.580858] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:38.000 [2024-07-11 16:43:14.581042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138085 ] 00:27:38.000 [2024-07-11 16:43:14.731289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.259 [2024-07-11 16:43:14.898757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.518 [2024-07-11 16:43:15.148064] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:38.518 [2024-07-11 16:43:15.148151] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:38.518 [2024-07-11 16:43:15.148194] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:39.085 [2024-07-11 16:43:15.716432] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:39.343 16:43:16 -- common/autotest_common.sh@643 -- # es=216 00:27:39.343 16:43:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:39.343 16:43:16 -- common/autotest_common.sh@652 -- # es=88 00:27:39.344 16:43:16 -- common/autotest_common.sh@653 -- # case "$es" in 00:27:39.344 16:43:16 -- common/autotest_common.sh@660 -- # es=1 00:27:39.344 16:43:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:39.344 16:43:16 -- dd/posix.sh@46 -- # gen_bytes 512 00:27:39.344 16:43:16 -- dd/common.sh@98 -- # xtrace_disable 00:27:39.344 16:43:16 -- common/autotest_common.sh@10 -- # set +x 00:27:39.344 16:43:16 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:39.344 [2024-07-11 16:43:16.111268] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
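The directory and nofollow cases are negative tests built on the harness's NOT wrapper, which turns an expected failure into a pass (the es=216/es=236 bookkeeping above tracks the expected non-zero exit codes). The two nofollow halves plus the final control copy reduce to roughly the following; NOT is shown here as a simple stand-in for the autotest helper:

# Sketch of the nofollow negative checks over symlinked dump files.
NOT() { ! "$@"; }                           # stand-in: succeed only on failure
SPDK_DD=${SPDK_DD:-./build/bin/spdk_dd}

ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link
# Reading through a symlink must fail when nofollow is set on the input...
NOT "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
# ...and writing through one must fail when nofollow is set on the output.
NOT "$SPDK_DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow
# Without the flag, the same copy through the link succeeds (512-byte payload).
"$SPDK_DD" --if=dd.dump0.link --of=dd.dump1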
00:27:39.344 [2024-07-11 16:43:16.112281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138104 ] 00:27:39.602 [2024-07-11 16:43:16.282240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.861 [2024-07-11 16:43:16.442435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.055  Copying: 512/512 [B] (average 500 kBps) 00:27:41.055 00:27:41.055 16:43:17 -- dd/posix.sh@49 -- # [[ o04w1n6yodynsm1e8oyxxiruflnfqdw4woalke5c8u0my00ufw8efm34x9ptpelhb9ra4fdzey0rnpi3ilff15hb1qedtlbw98l7j1c589rumn3b4iapmq49i26rhlbskc38tmpxgzudco0wnz0pdn0z53024vqisklvcj9uvbiilluhi3t927tr4mku6m3knminvk7wiba05uftmx3fu7yjw1kj9oy4jj41406a6osekn8t7c4iztobqzfs3o7bp5kz78v7s6iulxjrelkr4edecsfoponj9uirwwmgxjjvjs44b1akqra700iyybro7bk2v1ynexehkk0xhbi3o3ek4qmnybrt2tjtrz2jviwy4autvj06hncidlkvpt9jl34bkoo02n9ese5ovn3s40phxrh1o1bb845fio5t4gytcbwollgcu6zvptcv6u245oo9xj2faxrop23lwgwp78xypvyhpme22y3oc7ucargsvzbbead3woop5bggeiz4 == \o\0\4\w\1\n\6\y\o\d\y\n\s\m\1\e\8\o\y\x\x\i\r\u\f\l\n\f\q\d\w\4\w\o\a\l\k\e\5\c\8\u\0\m\y\0\0\u\f\w\8\e\f\m\3\4\x\9\p\t\p\e\l\h\b\9\r\a\4\f\d\z\e\y\0\r\n\p\i\3\i\l\f\f\1\5\h\b\1\q\e\d\t\l\b\w\9\8\l\7\j\1\c\5\8\9\r\u\m\n\3\b\4\i\a\p\m\q\4\9\i\2\6\r\h\l\b\s\k\c\3\8\t\m\p\x\g\z\u\d\c\o\0\w\n\z\0\p\d\n\0\z\5\3\0\2\4\v\q\i\s\k\l\v\c\j\9\u\v\b\i\i\l\l\u\h\i\3\t\9\2\7\t\r\4\m\k\u\6\m\3\k\n\m\i\n\v\k\7\w\i\b\a\0\5\u\f\t\m\x\3\f\u\7\y\j\w\1\k\j\9\o\y\4\j\j\4\1\4\0\6\a\6\o\s\e\k\n\8\t\7\c\4\i\z\t\o\b\q\z\f\s\3\o\7\b\p\5\k\z\7\8\v\7\s\6\i\u\l\x\j\r\e\l\k\r\4\e\d\e\c\s\f\o\p\o\n\j\9\u\i\r\w\w\m\g\x\j\j\v\j\s\4\4\b\1\a\k\q\r\a\7\0\0\i\y\y\b\r\o\7\b\k\2\v\1\y\n\e\x\e\h\k\k\0\x\h\b\i\3\o\3\e\k\4\q\m\n\y\b\r\t\2\t\j\t\r\z\2\j\v\i\w\y\4\a\u\t\v\j\0\6\h\n\c\i\d\l\k\v\p\t\9\j\l\3\4\b\k\o\o\0\2\n\9\e\s\e\5\o\v\n\3\s\4\0\p\h\x\r\h\1\o\1\b\b\8\4\5\f\i\o\5\t\4\g\y\t\c\b\w\o\l\l\g\c\u\6\z\v\p\t\c\v\6\u\2\4\5\o\o\9\x\j\2\f\a\x\r\o\p\2\3\l\w\g\w\p\7\8\x\y\p\v\y\h\p\m\e\2\2\y\3\o\c\7\u\c\a\r\g\s\v\z\b\b\e\a\d\3\w\o\o\p\5\b\g\g\e\i\z\4 ]] 00:27:41.055 ************************************ 00:27:41.055 END TEST dd_flag_nofollow 00:27:41.055 ************************************ 00:27:41.055 00:27:41.055 real 0m4.632s 00:27:41.055 user 0m3.726s 00:27:41.055 sys 0m0.574s 00:27:41.055 16:43:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:41.055 16:43:17 -- common/autotest_common.sh@10 -- # set +x 00:27:41.055 16:43:17 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:27:41.055 16:43:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:41.055 16:43:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:41.055 16:43:17 -- common/autotest_common.sh@10 -- # set +x 00:27:41.055 ************************************ 00:27:41.055 START TEST dd_flag_noatime 00:27:41.055 ************************************ 00:27:41.055 16:43:17 -- common/autotest_common.sh@1104 -- # noatime 00:27:41.055 16:43:17 -- dd/posix.sh@53 -- # local atime_if 00:27:41.055 16:43:17 -- dd/posix.sh@54 -- # local atime_of 00:27:41.055 16:43:17 -- dd/posix.sh@58 -- # gen_bytes 512 00:27:41.055 16:43:17 -- dd/common.sh@98 -- # xtrace_disable 00:27:41.055 16:43:17 -- common/autotest_common.sh@10 -- # set +x 00:27:41.055 16:43:17 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:41.055 16:43:17 -- dd/posix.sh@60 -- # atime_if=1720716196 00:27:41.055 16:43:17 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:41.055 16:43:17 -- dd/posix.sh@61 -- # atime_of=1720716197 00:27:41.055 16:43:17 -- dd/posix.sh@66 -- # sleep 1 00:27:41.989 16:43:18 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:41.989 [2024-07-11 16:43:18.763068] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:41.989 [2024-07-11 16:43:18.763384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138168 ] 00:27:42.247 [2024-07-11 16:43:18.929910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.505 [2024-07-11 16:43:19.082295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.706  Copying: 512/512 [B] (average 500 kBps) 00:27:43.706 00:27:43.706 16:43:20 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:43.706 16:43:20 -- dd/posix.sh@69 -- # (( atime_if == 1720716196 )) 00:27:43.706 16:43:20 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:43.706 16:43:20 -- dd/posix.sh@70 -- # (( atime_of == 1720716197 )) 00:27:43.706 16:43:20 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:43.706 [2024-07-11 16:43:20.332921] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:43.706 [2024-07-11 16:43:20.333151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138188 ] 00:27:43.706 [2024-07-11 16:43:20.499262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.964 [2024-07-11 16:43:20.653877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.156  Copying: 512/512 [B] (average 500 kBps) 00:27:45.156 00:27:45.156 16:43:21 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:45.156 16:43:21 -- dd/posix.sh@73 -- # (( atime_if < 1720716200 )) 00:27:45.156 00:27:45.156 real 0m4.182s 00:27:45.156 user 0m2.488s 00:27:45.156 sys 0m0.418s 00:27:45.156 16:43:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:45.156 16:43:21 -- common/autotest_common.sh@10 -- # set +x 00:27:45.156 ************************************ 00:27:45.156 END TEST dd_flag_noatime 00:27:45.156 ************************************ 00:27:45.156 16:43:21 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:27:45.156 16:43:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:45.156 16:43:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:45.156 16:43:21 -- common/autotest_common.sh@10 -- # set +x 00:27:45.156 ************************************ 00:27:45.156 START TEST dd_flags_misc 00:27:45.156 ************************************ 00:27:45.156 16:43:21 -- common/autotest_common.sh@1104 -- # io 00:27:45.156 16:43:21 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:27:45.156 16:43:21 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 
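The dd_flag_noatime test that just finished compares epoch-second access times from stat --printf=%X around two copies: with --iflag=noatime the source atime must not move, while a later plain copy is expected to advance it (the 1720716196 vs 1720716200 comparison above). In outline, assuming atime-updating mount semantics:

# Noatime check in outline: snapshot atime, copy with and without the flag.
SPDK_DD=${SPDK_DD:-./build/bin/spdk_dd}
atime_if=$(stat --printf=%X dd.dump0)
sleep 1                                    # let the clock tick, as the test does
"$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_if ))   # unchanged under noatime
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1
(( atime_if < $(stat --printf=%X dd.dump0) ))    # plain read advances it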
00:27:45.156 16:43:21 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:27:45.156 16:43:21 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:45.157 16:43:21 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:45.157 16:43:21 -- dd/common.sh@98 -- # xtrace_disable 00:27:45.157 16:43:21 -- common/autotest_common.sh@10 -- # set +x 00:27:45.157 16:43:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:45.157 16:43:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:45.430 [2024-07-11 16:43:21.969832] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:45.430 [2024-07-11 16:43:21.969995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138251 ] 00:27:45.430 [2024-07-11 16:43:22.122463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.703 [2024-07-11 16:43:22.284585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.897  Copying: 512/512 [B] (average 500 kBps) 00:27:46.897 00:27:46.897 16:43:23 -- dd/posix.sh@93 -- # [[ szpyu8q3csnghbpnxviumvmmr9vkn5i0o0q90vib543dnpjbelzalboms55oo0n40ama0ytqb3n2rbtmkvj022fzea1h1puxopexz1aqjsuwtiil1tpf3c1hoyxh5iygay3qh4m157k6j15pdf776d9qj3n12fr9xiy4ni9lqfvfhodoljasa9koh8tli8fejhpdg8g7bp3eo4x5hw4ljj72kmldllld8zebsiiki6voiae1rvgydy2tm0zdn0rcq5bst0m69fb5l865q4sm8g4lztzbxjhuhgyjylokcve7o1zae8v67lgeq3qdfxitv6vhz9rjrfgjncjhcydwqqjob39xnodknpane44cc83ad2db7d4zozjkqpkmxea805qsiewfskjuzy4imyg2esmgu2e4kj8s2m4ckxhs9gerk5w2fjtjt0zpicqqeukjgm4cacpld0g30b5za3xavz6gqzp28udy6500gqsgpje6qqqm5c9js81t440w4sas == \s\z\p\y\u\8\q\3\c\s\n\g\h\b\p\n\x\v\i\u\m\v\m\m\r\9\v\k\n\5\i\0\o\0\q\9\0\v\i\b\5\4\3\d\n\p\j\b\e\l\z\a\l\b\o\m\s\5\5\o\o\0\n\4\0\a\m\a\0\y\t\q\b\3\n\2\r\b\t\m\k\v\j\0\2\2\f\z\e\a\1\h\1\p\u\x\o\p\e\x\z\1\a\q\j\s\u\w\t\i\i\l\1\t\p\f\3\c\1\h\o\y\x\h\5\i\y\g\a\y\3\q\h\4\m\1\5\7\k\6\j\1\5\p\d\f\7\7\6\d\9\q\j\3\n\1\2\f\r\9\x\i\y\4\n\i\9\l\q\f\v\f\h\o\d\o\l\j\a\s\a\9\k\o\h\8\t\l\i\8\f\e\j\h\p\d\g\8\g\7\b\p\3\e\o\4\x\5\h\w\4\l\j\j\7\2\k\m\l\d\l\l\l\d\8\z\e\b\s\i\i\k\i\6\v\o\i\a\e\1\r\v\g\y\d\y\2\t\m\0\z\d\n\0\r\c\q\5\b\s\t\0\m\6\9\f\b\5\l\8\6\5\q\4\s\m\8\g\4\l\z\t\z\b\x\j\h\u\h\g\y\j\y\l\o\k\c\v\e\7\o\1\z\a\e\8\v\6\7\l\g\e\q\3\q\d\f\x\i\t\v\6\v\h\z\9\r\j\r\f\g\j\n\c\j\h\c\y\d\w\q\q\j\o\b\3\9\x\n\o\d\k\n\p\a\n\e\4\4\c\c\8\3\a\d\2\d\b\7\d\4\z\o\z\j\k\q\p\k\m\x\e\a\8\0\5\q\s\i\e\w\f\s\k\j\u\z\y\4\i\m\y\g\2\e\s\m\g\u\2\e\4\k\j\8\s\2\m\4\c\k\x\h\s\9\g\e\r\k\5\w\2\f\j\t\j\t\0\z\p\i\c\q\q\e\u\k\j\g\m\4\c\a\c\p\l\d\0\g\3\0\b\5\z\a\3\x\a\v\z\6\g\q\z\p\2\8\u\d\y\6\5\0\0\g\q\s\g\p\j\e\6\q\q\q\m\5\c\9\j\s\8\1\t\4\4\0\w\4\s\a\s ]] 00:27:46.897 16:43:23 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:46.897 16:43:23 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:46.897 [2024-07-11 16:43:23.538636] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
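The very long bracketed lines around each copy are not corruption: dd/posix.sh@93 compares the payload read back from dd.dump1 against the original, and because the suite runs under set -x, bash's xtrace prints the quoted right-hand side of [[ == ]] with every character backslash-escaped, so it is matched literally rather than as a glob. Each 512-byte payload therefore appears twice, once plain and once escaped. A two-line demo of the same effect:

    set -x
    payload=abc123
    [[ $payload == "$payload" ]]
    # xtrace prints:  [[ abc123 == \a\b\c\1\2\3 ]]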
00:27:46.897 [2024-07-11 16:43:23.538818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138274 ] 00:27:47.156 [2024-07-11 16:43:23.707296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.156 [2024-07-11 16:43:23.860771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.349  Copying: 512/512 [B] (average 500 kBps) 00:27:48.349 00:27:48.349 16:43:25 -- dd/posix.sh@93 -- # [[ szpyu8q3csnghbpnxviumvmmr9vkn5i0o0q90vib543dnpjbelzalboms55oo0n40ama0ytqb3n2rbtmkvj022fzea1h1puxopexz1aqjsuwtiil1tpf3c1hoyxh5iygay3qh4m157k6j15pdf776d9qj3n12fr9xiy4ni9lqfvfhodoljasa9koh8tli8fejhpdg8g7bp3eo4x5hw4ljj72kmldllld8zebsiiki6voiae1rvgydy2tm0zdn0rcq5bst0m69fb5l865q4sm8g4lztzbxjhuhgyjylokcve7o1zae8v67lgeq3qdfxitv6vhz9rjrfgjncjhcydwqqjob39xnodknpane44cc83ad2db7d4zozjkqpkmxea805qsiewfskjuzy4imyg2esmgu2e4kj8s2m4ckxhs9gerk5w2fjtjt0zpicqqeukjgm4cacpld0g30b5za3xavz6gqzp28udy6500gqsgpje6qqqm5c9js81t440w4sas == \s\z\p\y\u\8\q\3\c\s\n\g\h\b\p\n\x\v\i\u\m\v\m\m\r\9\v\k\n\5\i\0\o\0\q\9\0\v\i\b\5\4\3\d\n\p\j\b\e\l\z\a\l\b\o\m\s\5\5\o\o\0\n\4\0\a\m\a\0\y\t\q\b\3\n\2\r\b\t\m\k\v\j\0\2\2\f\z\e\a\1\h\1\p\u\x\o\p\e\x\z\1\a\q\j\s\u\w\t\i\i\l\1\t\p\f\3\c\1\h\o\y\x\h\5\i\y\g\a\y\3\q\h\4\m\1\5\7\k\6\j\1\5\p\d\f\7\7\6\d\9\q\j\3\n\1\2\f\r\9\x\i\y\4\n\i\9\l\q\f\v\f\h\o\d\o\l\j\a\s\a\9\k\o\h\8\t\l\i\8\f\e\j\h\p\d\g\8\g\7\b\p\3\e\o\4\x\5\h\w\4\l\j\j\7\2\k\m\l\d\l\l\l\d\8\z\e\b\s\i\i\k\i\6\v\o\i\a\e\1\r\v\g\y\d\y\2\t\m\0\z\d\n\0\r\c\q\5\b\s\t\0\m\6\9\f\b\5\l\8\6\5\q\4\s\m\8\g\4\l\z\t\z\b\x\j\h\u\h\g\y\j\y\l\o\k\c\v\e\7\o\1\z\a\e\8\v\6\7\l\g\e\q\3\q\d\f\x\i\t\v\6\v\h\z\9\r\j\r\f\g\j\n\c\j\h\c\y\d\w\q\q\j\o\b\3\9\x\n\o\d\k\n\p\a\n\e\4\4\c\c\8\3\a\d\2\d\b\7\d\4\z\o\z\j\k\q\p\k\m\x\e\a\8\0\5\q\s\i\e\w\f\s\k\j\u\z\y\4\i\m\y\g\2\e\s\m\g\u\2\e\4\k\j\8\s\2\m\4\c\k\x\h\s\9\g\e\r\k\5\w\2\f\j\t\j\t\0\z\p\i\c\q\q\e\u\k\j\g\m\4\c\a\c\p\l\d\0\g\3\0\b\5\z\a\3\x\a\v\z\6\g\q\z\p\2\8\u\d\y\6\5\0\0\g\q\s\g\p\j\e\6\q\q\q\m\5\c\9\j\s\8\1\t\4\4\0\w\4\s\a\s ]] 00:27:48.349 16:43:25 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:48.349 16:43:25 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:48.349 [2024-07-11 16:43:25.138700] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
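What dd/posix.sh@93 asserts in each of these blocks is simply that the copy is byte-identical to the source. The helper's internals are not visible in this log; a plausible plain-bash equivalent (an assumption, not the suite's actual code) is:

    [[ $(<dd.dump1) == "$(<dd.dump0)" ]] && echo "payload round-tripped intact"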
00:27:48.349 [2024-07-11 16:43:25.138906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138301 ] 00:27:48.608 [2024-07-11 16:43:25.305949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.867 [2024-07-11 16:43:25.460910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.061  Copying: 512/512 [B] (average 250 kBps) 00:27:50.061 00:27:50.061 16:43:26 -- dd/posix.sh@93 -- # [[ szpyu8q3csnghbpnxviumvmmr9vkn5i0o0q90vib543dnpjbelzalboms55oo0n40ama0ytqb3n2rbtmkvj022fzea1h1puxopexz1aqjsuwtiil1tpf3c1hoyxh5iygay3qh4m157k6j15pdf776d9qj3n12fr9xiy4ni9lqfvfhodoljasa9koh8tli8fejhpdg8g7bp3eo4x5hw4ljj72kmldllld8zebsiiki6voiae1rvgydy2tm0zdn0rcq5bst0m69fb5l865q4sm8g4lztzbxjhuhgyjylokcve7o1zae8v67lgeq3qdfxitv6vhz9rjrfgjncjhcydwqqjob39xnodknpane44cc83ad2db7d4zozjkqpkmxea805qsiewfskjuzy4imyg2esmgu2e4kj8s2m4ckxhs9gerk5w2fjtjt0zpicqqeukjgm4cacpld0g30b5za3xavz6gqzp28udy6500gqsgpje6qqqm5c9js81t440w4sas == \s\z\p\y\u\8\q\3\c\s\n\g\h\b\p\n\x\v\i\u\m\v\m\m\r\9\v\k\n\5\i\0\o\0\q\9\0\v\i\b\5\4\3\d\n\p\j\b\e\l\z\a\l\b\o\m\s\5\5\o\o\0\n\4\0\a\m\a\0\y\t\q\b\3\n\2\r\b\t\m\k\v\j\0\2\2\f\z\e\a\1\h\1\p\u\x\o\p\e\x\z\1\a\q\j\s\u\w\t\i\i\l\1\t\p\f\3\c\1\h\o\y\x\h\5\i\y\g\a\y\3\q\h\4\m\1\5\7\k\6\j\1\5\p\d\f\7\7\6\d\9\q\j\3\n\1\2\f\r\9\x\i\y\4\n\i\9\l\q\f\v\f\h\o\d\o\l\j\a\s\a\9\k\o\h\8\t\l\i\8\f\e\j\h\p\d\g\8\g\7\b\p\3\e\o\4\x\5\h\w\4\l\j\j\7\2\k\m\l\d\l\l\l\d\8\z\e\b\s\i\i\k\i\6\v\o\i\a\e\1\r\v\g\y\d\y\2\t\m\0\z\d\n\0\r\c\q\5\b\s\t\0\m\6\9\f\b\5\l\8\6\5\q\4\s\m\8\g\4\l\z\t\z\b\x\j\h\u\h\g\y\j\y\l\o\k\c\v\e\7\o\1\z\a\e\8\v\6\7\l\g\e\q\3\q\d\f\x\i\t\v\6\v\h\z\9\r\j\r\f\g\j\n\c\j\h\c\y\d\w\q\q\j\o\b\3\9\x\n\o\d\k\n\p\a\n\e\4\4\c\c\8\3\a\d\2\d\b\7\d\4\z\o\z\j\k\q\p\k\m\x\e\a\8\0\5\q\s\i\e\w\f\s\k\j\u\z\y\4\i\m\y\g\2\e\s\m\g\u\2\e\4\k\j\8\s\2\m\4\c\k\x\h\s\9\g\e\r\k\5\w\2\f\j\t\j\t\0\z\p\i\c\q\q\e\u\k\j\g\m\4\c\a\c\p\l\d\0\g\3\0\b\5\z\a\3\x\a\v\z\6\g\q\z\p\2\8\u\d\y\6\5\0\0\g\q\s\g\p\j\e\6\q\q\q\m\5\c\9\j\s\8\1\t\4\4\0\w\4\s\a\s ]] 00:27:50.061 16:43:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:50.061 16:43:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:50.061 [2024-07-11 16:43:26.702075] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
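The payloads themselves come from gen_bytes 512 in dd/common.sh. Its implementation is likewise not shown here, but judging from the strings above it yields N random lowercase-alphanumeric bytes; a hypothetical equivalent:

    gen_bytes() {   # hypothetical stand-in for dd/common.sh's gen_bytes
        tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"
    }
    gen_bytes 512 > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0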
00:27:50.061 [2024-07-11 16:43:26.702277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138325 ] 00:27:50.061 [2024-07-11 16:43:26.868183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.319 [2024-07-11 16:43:27.020679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.515  Copying: 512/512 [B] (average 250 kBps) 00:27:51.515 00:27:51.515 16:43:28 -- dd/posix.sh@93 -- # [[ szpyu8q3csnghbpnxviumvmmr9vkn5i0o0q90vib543dnpjbelzalboms55oo0n40ama0ytqb3n2rbtmkvj022fzea1h1puxopexz1aqjsuwtiil1tpf3c1hoyxh5iygay3qh4m157k6j15pdf776d9qj3n12fr9xiy4ni9lqfvfhodoljasa9koh8tli8fejhpdg8g7bp3eo4x5hw4ljj72kmldllld8zebsiiki6voiae1rvgydy2tm0zdn0rcq5bst0m69fb5l865q4sm8g4lztzbxjhuhgyjylokcve7o1zae8v67lgeq3qdfxitv6vhz9rjrfgjncjhcydwqqjob39xnodknpane44cc83ad2db7d4zozjkqpkmxea805qsiewfskjuzy4imyg2esmgu2e4kj8s2m4ckxhs9gerk5w2fjtjt0zpicqqeukjgm4cacpld0g30b5za3xavz6gqzp28udy6500gqsgpje6qqqm5c9js81t440w4sas == \s\z\p\y\u\8\q\3\c\s\n\g\h\b\p\n\x\v\i\u\m\v\m\m\r\9\v\k\n\5\i\0\o\0\q\9\0\v\i\b\5\4\3\d\n\p\j\b\e\l\z\a\l\b\o\m\s\5\5\o\o\0\n\4\0\a\m\a\0\y\t\q\b\3\n\2\r\b\t\m\k\v\j\0\2\2\f\z\e\a\1\h\1\p\u\x\o\p\e\x\z\1\a\q\j\s\u\w\t\i\i\l\1\t\p\f\3\c\1\h\o\y\x\h\5\i\y\g\a\y\3\q\h\4\m\1\5\7\k\6\j\1\5\p\d\f\7\7\6\d\9\q\j\3\n\1\2\f\r\9\x\i\y\4\n\i\9\l\q\f\v\f\h\o\d\o\l\j\a\s\a\9\k\o\h\8\t\l\i\8\f\e\j\h\p\d\g\8\g\7\b\p\3\e\o\4\x\5\h\w\4\l\j\j\7\2\k\m\l\d\l\l\l\d\8\z\e\b\s\i\i\k\i\6\v\o\i\a\e\1\r\v\g\y\d\y\2\t\m\0\z\d\n\0\r\c\q\5\b\s\t\0\m\6\9\f\b\5\l\8\6\5\q\4\s\m\8\g\4\l\z\t\z\b\x\j\h\u\h\g\y\j\y\l\o\k\c\v\e\7\o\1\z\a\e\8\v\6\7\l\g\e\q\3\q\d\f\x\i\t\v\6\v\h\z\9\r\j\r\f\g\j\n\c\j\h\c\y\d\w\q\q\j\o\b\3\9\x\n\o\d\k\n\p\a\n\e\4\4\c\c\8\3\a\d\2\d\b\7\d\4\z\o\z\j\k\q\p\k\m\x\e\a\8\0\5\q\s\i\e\w\f\s\k\j\u\z\y\4\i\m\y\g\2\e\s\m\g\u\2\e\4\k\j\8\s\2\m\4\c\k\x\h\s\9\g\e\r\k\5\w\2\f\j\t\j\t\0\z\p\i\c\q\q\e\u\k\j\g\m\4\c\a\c\p\l\d\0\g\3\0\b\5\z\a\3\x\a\v\z\6\g\q\z\p\2\8\u\d\y\6\5\0\0\g\q\s\g\p\j\e\6\q\q\q\m\5\c\9\j\s\8\1\t\4\4\0\w\4\s\a\s ]] 00:27:51.515 16:43:28 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:51.515 16:43:28 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:51.515 16:43:28 -- dd/common.sh@98 -- # xtrace_disable 00:27:51.515 16:43:28 -- common/autotest_common.sh@10 -- # set +x 00:27:51.515 16:43:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:51.515 16:43:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:51.515 [2024-07-11 16:43:28.267352] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
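Structurally, dd_flags_misc is a nested sweep: every read flag in flags_ro=(direct nonblock) is paired with every write flag in flags_rw=(direct nonblock sync dsync), eight copies in all, each verified as above. The same sweep with GNU dd standing in for spdk_dd (direct can legitimately fail on filesystems without O_DIRECT support):

    for flag_ro in direct nonblock; do
        for flag_rw in direct nonblock sync dsync; do
            dd if=dd.dump0 iflag=$flag_ro of=dd.dump1 oflag=$flag_rw 2>/dev/null
            cmp -s dd.dump0 dd.dump1 && echo "ok: iflag=$flag_ro oflag=$flag_rw"
        done
    done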
00:27:51.515 [2024-07-11 16:43:28.267555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138342 ] 00:27:51.773 [2024-07-11 16:43:28.434176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.032 [2024-07-11 16:43:28.608473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.226  Copying: 512/512 [B] (average 500 kBps) 00:27:53.226 00:27:53.226 16:43:29 -- dd/posix.sh@93 -- # [[ 5lb6ps2l8hgw3x5t68nkrs0b6b8273qe2c7f3o1gfw7ihkcppunyc40zl4louoradwl6ng58noed9leu1c75u6v5klbcsu74vvweqdgeg0pf3neprslrsed4k1jvy8pkkc88k024yrcq0gph1my5gk4zssuc4f77vdlg482c2lci76x1yc6l06m8sffemsdf68753delva03099blh48zb5bmsmbdj1mk1q08lbyzslteq30l5dv7hbvhve91vbjwjdcmb7phhlzcddeqkqdjs4bmp4gk9a5lezvjrmhcqki0e6yo9om75vo1nc6znxf5vf7loy7es5ipy7n2w7zprwrd2l6lh7ktuya68xv7od7ymu1shcn21sodho7ku59gbo03tel3cjmxdjae8uq46qmzk7q2vql8232ki6xiz3jkbjm6f6xvttfs3dkr8tmefshxf1dxeegw77f73dv2aplfmi96gy2w8xrnknjf8dsb8qgx51olfwqgjeavtsx == \5\l\b\6\p\s\2\l\8\h\g\w\3\x\5\t\6\8\n\k\r\s\0\b\6\b\8\2\7\3\q\e\2\c\7\f\3\o\1\g\f\w\7\i\h\k\c\p\p\u\n\y\c\4\0\z\l\4\l\o\u\o\r\a\d\w\l\6\n\g\5\8\n\o\e\d\9\l\e\u\1\c\7\5\u\6\v\5\k\l\b\c\s\u\7\4\v\v\w\e\q\d\g\e\g\0\p\f\3\n\e\p\r\s\l\r\s\e\d\4\k\1\j\v\y\8\p\k\k\c\8\8\k\0\2\4\y\r\c\q\0\g\p\h\1\m\y\5\g\k\4\z\s\s\u\c\4\f\7\7\v\d\l\g\4\8\2\c\2\l\c\i\7\6\x\1\y\c\6\l\0\6\m\8\s\f\f\e\m\s\d\f\6\8\7\5\3\d\e\l\v\a\0\3\0\9\9\b\l\h\4\8\z\b\5\b\m\s\m\b\d\j\1\m\k\1\q\0\8\l\b\y\z\s\l\t\e\q\3\0\l\5\d\v\7\h\b\v\h\v\e\9\1\v\b\j\w\j\d\c\m\b\7\p\h\h\l\z\c\d\d\e\q\k\q\d\j\s\4\b\m\p\4\g\k\9\a\5\l\e\z\v\j\r\m\h\c\q\k\i\0\e\6\y\o\9\o\m\7\5\v\o\1\n\c\6\z\n\x\f\5\v\f\7\l\o\y\7\e\s\5\i\p\y\7\n\2\w\7\z\p\r\w\r\d\2\l\6\l\h\7\k\t\u\y\a\6\8\x\v\7\o\d\7\y\m\u\1\s\h\c\n\2\1\s\o\d\h\o\7\k\u\5\9\g\b\o\0\3\t\e\l\3\c\j\m\x\d\j\a\e\8\u\q\4\6\q\m\z\k\7\q\2\v\q\l\8\2\3\2\k\i\6\x\i\z\3\j\k\b\j\m\6\f\6\x\v\t\t\f\s\3\d\k\r\8\t\m\e\f\s\h\x\f\1\d\x\e\e\g\w\7\7\f\7\3\d\v\2\a\p\l\f\m\i\9\6\g\y\2\w\8\x\r\n\k\n\j\f\8\d\s\b\8\q\g\x\5\1\o\l\f\w\q\g\j\e\a\v\t\s\x ]] 00:27:53.226 16:43:29 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:53.226 16:43:29 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:53.226 [2024-07-11 16:43:29.834672] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
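Of the write flags, sync and dsync differ only in how much each write waits for: O_DSYNC requires the data to reach stable storage before write() returns, while O_SYNC additionally flushes the file's metadata. On the small test files here the difference is invisible, but on a real disk it can be timed:

    for f in dsync sync; do
        time dd if=dd.dump0 of=dd.dump1 oflag=$f 2>/dev/null   # sync is usually the slower of the two
    done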
00:27:53.226 [2024-07-11 16:43:29.834849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138370 ] 00:27:53.226 [2024-07-11 16:43:29.986076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.485 [2024-07-11 16:43:30.138454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.680  Copying: 512/512 [B] (average 500 kBps) 00:27:54.680 00:27:54.680 16:43:31 -- dd/posix.sh@93 -- # [[ 5lb6ps2l8hgw3x5t68nkrs0b6b8273qe2c7f3o1gfw7ihkcppunyc40zl4louoradwl6ng58noed9leu1c75u6v5klbcsu74vvweqdgeg0pf3neprslrsed4k1jvy8pkkc88k024yrcq0gph1my5gk4zssuc4f77vdlg482c2lci76x1yc6l06m8sffemsdf68753delva03099blh48zb5bmsmbdj1mk1q08lbyzslteq30l5dv7hbvhve91vbjwjdcmb7phhlzcddeqkqdjs4bmp4gk9a5lezvjrmhcqki0e6yo9om75vo1nc6znxf5vf7loy7es5ipy7n2w7zprwrd2l6lh7ktuya68xv7od7ymu1shcn21sodho7ku59gbo03tel3cjmxdjae8uq46qmzk7q2vql8232ki6xiz3jkbjm6f6xvttfs3dkr8tmefshxf1dxeegw77f73dv2aplfmi96gy2w8xrnknjf8dsb8qgx51olfwqgjeavtsx == \5\l\b\6\p\s\2\l\8\h\g\w\3\x\5\t\6\8\n\k\r\s\0\b\6\b\8\2\7\3\q\e\2\c\7\f\3\o\1\g\f\w\7\i\h\k\c\p\p\u\n\y\c\4\0\z\l\4\l\o\u\o\r\a\d\w\l\6\n\g\5\8\n\o\e\d\9\l\e\u\1\c\7\5\u\6\v\5\k\l\b\c\s\u\7\4\v\v\w\e\q\d\g\e\g\0\p\f\3\n\e\p\r\s\l\r\s\e\d\4\k\1\j\v\y\8\p\k\k\c\8\8\k\0\2\4\y\r\c\q\0\g\p\h\1\m\y\5\g\k\4\z\s\s\u\c\4\f\7\7\v\d\l\g\4\8\2\c\2\l\c\i\7\6\x\1\y\c\6\l\0\6\m\8\s\f\f\e\m\s\d\f\6\8\7\5\3\d\e\l\v\a\0\3\0\9\9\b\l\h\4\8\z\b\5\b\m\s\m\b\d\j\1\m\k\1\q\0\8\l\b\y\z\s\l\t\e\q\3\0\l\5\d\v\7\h\b\v\h\v\e\9\1\v\b\j\w\j\d\c\m\b\7\p\h\h\l\z\c\d\d\e\q\k\q\d\j\s\4\b\m\p\4\g\k\9\a\5\l\e\z\v\j\r\m\h\c\q\k\i\0\e\6\y\o\9\o\m\7\5\v\o\1\n\c\6\z\n\x\f\5\v\f\7\l\o\y\7\e\s\5\i\p\y\7\n\2\w\7\z\p\r\w\r\d\2\l\6\l\h\7\k\t\u\y\a\6\8\x\v\7\o\d\7\y\m\u\1\s\h\c\n\2\1\s\o\d\h\o\7\k\u\5\9\g\b\o\0\3\t\e\l\3\c\j\m\x\d\j\a\e\8\u\q\4\6\q\m\z\k\7\q\2\v\q\l\8\2\3\2\k\i\6\x\i\z\3\j\k\b\j\m\6\f\6\x\v\t\t\f\s\3\d\k\r\8\t\m\e\f\s\h\x\f\1\d\x\e\e\g\w\7\7\f\7\3\d\v\2\a\p\l\f\m\i\9\6\g\y\2\w\8\x\r\n\k\n\j\f\8\d\s\b\8\q\g\x\5\1\o\l\f\w\q\g\j\e\a\v\t\s\x ]] 00:27:54.680 16:43:31 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:54.680 16:43:31 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:54.680 [2024-07-11 16:43:31.387406] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:54.680 [2024-07-11 16:43:31.387528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138408 ] 00:27:54.938 [2024-07-11 16:43:31.538367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.938 [2024-07-11 16:43:31.698431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.132  Copying: 512/512 [B] (average 250 kBps) 00:27:56.132 00:27:56.132 16:43:32 -- dd/posix.sh@93 -- # [[ 5lb6ps2l8hgw3x5t68nkrs0b6b8273qe2c7f3o1gfw7ihkcppunyc40zl4louoradwl6ng58noed9leu1c75u6v5klbcsu74vvweqdgeg0pf3neprslrsed4k1jvy8pkkc88k024yrcq0gph1my5gk4zssuc4f77vdlg482c2lci76x1yc6l06m8sffemsdf68753delva03099blh48zb5bmsmbdj1mk1q08lbyzslteq30l5dv7hbvhve91vbjwjdcmb7phhlzcddeqkqdjs4bmp4gk9a5lezvjrmhcqki0e6yo9om75vo1nc6znxf5vf7loy7es5ipy7n2w7zprwrd2l6lh7ktuya68xv7od7ymu1shcn21sodho7ku59gbo03tel3cjmxdjae8uq46qmzk7q2vql8232ki6xiz3jkbjm6f6xvttfs3dkr8tmefshxf1dxeegw77f73dv2aplfmi96gy2w8xrnknjf8dsb8qgx51olfwqgjeavtsx == \5\l\b\6\p\s\2\l\8\h\g\w\3\x\5\t\6\8\n\k\r\s\0\b\6\b\8\2\7\3\q\e\2\c\7\f\3\o\1\g\f\w\7\i\h\k\c\p\p\u\n\y\c\4\0\z\l\4\l\o\u\o\r\a\d\w\l\6\n\g\5\8\n\o\e\d\9\l\e\u\1\c\7\5\u\6\v\5\k\l\b\c\s\u\7\4\v\v\w\e\q\d\g\e\g\0\p\f\3\n\e\p\r\s\l\r\s\e\d\4\k\1\j\v\y\8\p\k\k\c\8\8\k\0\2\4\y\r\c\q\0\g\p\h\1\m\y\5\g\k\4\z\s\s\u\c\4\f\7\7\v\d\l\g\4\8\2\c\2\l\c\i\7\6\x\1\y\c\6\l\0\6\m\8\s\f\f\e\m\s\d\f\6\8\7\5\3\d\e\l\v\a\0\3\0\9\9\b\l\h\4\8\z\b\5\b\m\s\m\b\d\j\1\m\k\1\q\0\8\l\b\y\z\s\l\t\e\q\3\0\l\5\d\v\7\h\b\v\h\v\e\9\1\v\b\j\w\j\d\c\m\b\7\p\h\h\l\z\c\d\d\e\q\k\q\d\j\s\4\b\m\p\4\g\k\9\a\5\l\e\z\v\j\r\m\h\c\q\k\i\0\e\6\y\o\9\o\m\7\5\v\o\1\n\c\6\z\n\x\f\5\v\f\7\l\o\y\7\e\s\5\i\p\y\7\n\2\w\7\z\p\r\w\r\d\2\l\6\l\h\7\k\t\u\y\a\6\8\x\v\7\o\d\7\y\m\u\1\s\h\c\n\2\1\s\o\d\h\o\7\k\u\5\9\g\b\o\0\3\t\e\l\3\c\j\m\x\d\j\a\e\8\u\q\4\6\q\m\z\k\7\q\2\v\q\l\8\2\3\2\k\i\6\x\i\z\3\j\k\b\j\m\6\f\6\x\v\t\t\f\s\3\d\k\r\8\t\m\e\f\s\h\x\f\1\d\x\e\e\g\w\7\7\f\7\3\d\v\2\a\p\l\f\m\i\9\6\g\y\2\w\8\x\r\n\k\n\j\f\8\d\s\b\8\q\g\x\5\1\o\l\f\w\q\g\j\e\a\v\t\s\x ]] 00:27:56.132 16:43:32 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:56.132 16:43:32 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:56.407 [2024-07-11 16:43:32.942944] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:56.407 [2024-07-11 16:43:32.943144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138432 ] 00:27:56.407 [2024-07-11 16:43:33.110302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.666 [2024-07-11 16:43:33.275589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.857  Copying: 512/512 [B] (average 250 kBps) 00:27:57.857 00:27:57.857 ************************************ 00:27:57.857 END TEST dd_flags_misc 00:27:57.857 ************************************ 00:27:57.858 16:43:34 -- dd/posix.sh@93 -- # [[ 5lb6ps2l8hgw3x5t68nkrs0b6b8273qe2c7f3o1gfw7ihkcppunyc40zl4louoradwl6ng58noed9leu1c75u6v5klbcsu74vvweqdgeg0pf3neprslrsed4k1jvy8pkkc88k024yrcq0gph1my5gk4zssuc4f77vdlg482c2lci76x1yc6l06m8sffemsdf68753delva03099blh48zb5bmsmbdj1mk1q08lbyzslteq30l5dv7hbvhve91vbjwjdcmb7phhlzcddeqkqdjs4bmp4gk9a5lezvjrmhcqki0e6yo9om75vo1nc6znxf5vf7loy7es5ipy7n2w7zprwrd2l6lh7ktuya68xv7od7ymu1shcn21sodho7ku59gbo03tel3cjmxdjae8uq46qmzk7q2vql8232ki6xiz3jkbjm6f6xvttfs3dkr8tmefshxf1dxeegw77f73dv2aplfmi96gy2w8xrnknjf8dsb8qgx51olfwqgjeavtsx == \5\l\b\6\p\s\2\l\8\h\g\w\3\x\5\t\6\8\n\k\r\s\0\b\6\b\8\2\7\3\q\e\2\c\7\f\3\o\1\g\f\w\7\i\h\k\c\p\p\u\n\y\c\4\0\z\l\4\l\o\u\o\r\a\d\w\l\6\n\g\5\8\n\o\e\d\9\l\e\u\1\c\7\5\u\6\v\5\k\l\b\c\s\u\7\4\v\v\w\e\q\d\g\e\g\0\p\f\3\n\e\p\r\s\l\r\s\e\d\4\k\1\j\v\y\8\p\k\k\c\8\8\k\0\2\4\y\r\c\q\0\g\p\h\1\m\y\5\g\k\4\z\s\s\u\c\4\f\7\7\v\d\l\g\4\8\2\c\2\l\c\i\7\6\x\1\y\c\6\l\0\6\m\8\s\f\f\e\m\s\d\f\6\8\7\5\3\d\e\l\v\a\0\3\0\9\9\b\l\h\4\8\z\b\5\b\m\s\m\b\d\j\1\m\k\1\q\0\8\l\b\y\z\s\l\t\e\q\3\0\l\5\d\v\7\h\b\v\h\v\e\9\1\v\b\j\w\j\d\c\m\b\7\p\h\h\l\z\c\d\d\e\q\k\q\d\j\s\4\b\m\p\4\g\k\9\a\5\l\e\z\v\j\r\m\h\c\q\k\i\0\e\6\y\o\9\o\m\7\5\v\o\1\n\c\6\z\n\x\f\5\v\f\7\l\o\y\7\e\s\5\i\p\y\7\n\2\w\7\z\p\r\w\r\d\2\l\6\l\h\7\k\t\u\y\a\6\8\x\v\7\o\d\7\y\m\u\1\s\h\c\n\2\1\s\o\d\h\o\7\k\u\5\9\g\b\o\0\3\t\e\l\3\c\j\m\x\d\j\a\e\8\u\q\4\6\q\m\z\k\7\q\2\v\q\l\8\2\3\2\k\i\6\x\i\z\3\j\k\b\j\m\6\f\6\x\v\t\t\f\s\3\d\k\r\8\t\m\e\f\s\h\x\f\1\d\x\e\e\g\w\7\7\f\7\3\d\v\2\a\p\l\f\m\i\9\6\g\y\2\w\8\x\r\n\k\n\j\f\8\d\s\b\8\q\g\x\5\1\o\l\f\w\q\g\j\e\a\v\t\s\x ]] 00:27:57.858 00:27:57.858 real 0m12.552s 00:27:57.858 user 0m9.786s 00:27:57.858 sys 0m1.667s 00:27:57.858 16:43:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:57.858 16:43:34 -- common/autotest_common.sh@10 -- # set +x 00:27:57.858 16:43:34 -- dd/posix.sh@131 -- # tests_forced_aio 00:27:57.858 16:43:34 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:27:57.858 * Second test run, using AIO 00:27:57.858 16:43:34 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:27:57.858 16:43:34 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:27:57.858 16:43:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:57.858 16:43:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:57.858 16:43:34 -- common/autotest_common.sh@10 -- # set +x 00:27:57.858 ************************************ 00:27:57.858 START TEST dd_flag_append_forced_aio 00:27:57.858 ************************************ 00:27:57.858 16:43:34 -- common/autotest_common.sh@1104 -- # append 00:27:57.858 16:43:34 -- dd/posix.sh@16 -- # local dump0 00:27:57.858 16:43:34 -- dd/posix.sh@17 -- # local dump1 00:27:57.858 16:43:34 -- dd/posix.sh@19 -- # gen_bytes 32 00:27:57.858 16:43:34 -- dd/common.sh@98 -- # xtrace_disable 
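From "Second test run" onward the whole group repeats with spdk_dd's --aio option, added once by appending to the DD_APP command array, and the first AIO case checks --oflag=append: after the copy, dd.dump1 must hold its own original payload with dump0's payload concatenated behind it (the ab93...i5gg... comparison below). A sketch of both mechanisms, with hypothetical payload variable names, and GNU dd as a stand-in for the append check (GNU dd additionally needs conv=notrunc so the target is not truncated first):

    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
    DD_APP+=("--aio")                  # from here on, every run uses AIO
    "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1 --oflag=append

    # equivalent append with GNU dd, plus the expected result
    # (dump0_payload/dump1_payload are hypothetical names for the two strings):
    dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc 2>/dev/null
    [[ $(<dd.dump1) == "${dump1_payload}${dump0_payload}" ]] && echo append-ok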
00:27:57.858 16:43:34 -- common/autotest_common.sh@10 -- # set +x 00:27:57.858 16:43:34 -- dd/posix.sh@19 -- # dump0=i5ggs1z5q386m0g2ga7zxyli26o0qjpv 00:27:57.858 16:43:34 -- dd/posix.sh@20 -- # gen_bytes 32 00:27:57.858 16:43:34 -- dd/common.sh@98 -- # xtrace_disable 00:27:57.858 16:43:34 -- common/autotest_common.sh@10 -- # set +x 00:27:57.858 16:43:34 -- dd/posix.sh@20 -- # dump1=ab93oso5ivh2p6z10lkmct83z2jvebd9 00:27:57.858 16:43:34 -- dd/posix.sh@22 -- # printf %s i5ggs1z5q386m0g2ga7zxyli26o0qjpv 00:27:57.858 16:43:34 -- dd/posix.sh@23 -- # printf %s ab93oso5ivh2p6z10lkmct83z2jvebd9 00:27:57.858 16:43:34 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:27:57.858 [2024-07-11 16:43:34.584109] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:57.858 [2024-07-11 16:43:34.584314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138470 ] 00:27:58.116 [2024-07-11 16:43:34.748426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.373 [2024-07-11 16:43:34.925313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.307  Copying: 32/32 [B] (average 31 kBps) 00:27:59.307 00:27:59.307 16:43:36 -- dd/posix.sh@27 -- # [[ ab93oso5ivh2p6z10lkmct83z2jvebd9i5ggs1z5q386m0g2ga7zxyli26o0qjpv == \a\b\9\3\o\s\o\5\i\v\h\2\p\6\z\1\0\l\k\m\c\t\8\3\z\2\j\v\e\b\d\9\i\5\g\g\s\1\z\5\q\3\8\6\m\0\g\2\g\a\7\z\x\y\l\i\2\6\o\0\q\j\p\v ]] 00:27:59.307 00:27:59.307 real 0m1.595s 00:27:59.307 user 0m1.221s 00:27:59.307 sys 0m0.237s 00:27:59.307 16:43:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:59.307 ************************************ 00:27:59.307 END TEST dd_flag_append_forced_aio 00:27:59.307 ************************************ 00:27:59.307 16:43:36 -- common/autotest_common.sh@10 -- # set +x 00:27:59.565 16:43:36 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:27:59.565 16:43:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:59.565 16:43:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:59.565 16:43:36 -- common/autotest_common.sh@10 -- # set +x 00:27:59.565 ************************************ 00:27:59.565 START TEST dd_flag_directory_forced_aio 00:27:59.565 ************************************ 00:27:59.565 16:43:36 -- common/autotest_common.sh@1104 -- # directory 00:27:59.565 16:43:36 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:59.565 16:43:36 -- common/autotest_common.sh@640 -- # local es=0 00:27:59.565 16:43:36 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:59.565 16:43:36 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:59.565 16:43:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:59.565 16:43:36 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:59.565 16:43:36 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:59.565 16:43:36 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:59.565 16:43:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:59.565 16:43:36 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:59.565 16:43:36 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:59.565 16:43:36 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:59.565 [2024-07-11 16:43:36.222800] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:59.565 [2024-07-11 16:43:36.222997] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138524 ] 00:27:59.823 [2024-07-11 16:43:36.386561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.823 [2024-07-11 16:43:36.550789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.081 [2024-07-11 16:43:36.798680] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:00.081 [2024-07-11 16:43:36.798745] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:00.081 [2024-07-11 16:43:36.798789] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:00.649 [2024-07-11 16:43:37.387679] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:00.908 16:43:37 -- common/autotest_common.sh@643 -- # es=236 00:28:00.908 16:43:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:00.908 16:43:37 -- common/autotest_common.sh@652 -- # es=108 00:28:00.908 16:43:37 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:00.908 16:43:37 -- common/autotest_common.sh@660 -- # es=1 00:28:00.908 16:43:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:00.908 16:43:37 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:00.908 16:43:37 -- common/autotest_common.sh@640 -- # local es=0 00:28:00.908 16:43:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:00.908 16:43:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.908 16:43:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:00.908 16:43:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.908 16:43:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:00.908 16:43:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:00.908 16:43:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:00.908 16:43:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
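dd_flag_directory_forced_aio is a negative test: handing a regular file to --iflag=directory must fail with "Not a directory". The NOT wrapper from autotest_common.sh inverts the exit status, and the case ladder above collapses the raw status (es=236, i.e. above 128) down to es=1 so that any failure counts as a pass. A minimal version of the same inverted assertion, using GNU dd's equivalent directory flag for illustration:

    NOT() { ! "$@"; }    # succeeds only when the wrapped command fails
    NOT dd if=dd.dump0 iflag=directory of=/dev/null 2>/dev/null \
        && echo "ok: regular file rejected as expected"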
00:28:00.908 16:43:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:01.166 16:43:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:28:01.166 [2024-07-11 16:43:37.773706] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:01.166 [2024-07-11 16:43:37.773917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138544 ] 00:28:01.166 [2024-07-11 16:43:37.937024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.463 [2024-07-11 16:43:38.093378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.741 [2024-07-11 16:43:38.338679] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:01.741 [2024-07-11 16:43:38.338756] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:28:01.741 [2024-07-11 16:43:38.338799] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:02.309 [2024-07-11 16:43:38.916611] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:02.568 16:43:39 -- common/autotest_common.sh@643 -- # es=236 00:28:02.568 16:43:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:02.568 16:43:39 -- common/autotest_common.sh@652 -- # es=108 00:28:02.568 16:43:39 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:02.568 16:43:39 -- common/autotest_common.sh@660 -- # es=1 00:28:02.568 16:43:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:02.568 00:28:02.568 real 0m3.078s 00:28:02.568 user 0m2.439s 00:28:02.568 sys 0m0.419s 00:28:02.568 16:43:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.568 16:43:39 -- common/autotest_common.sh@10 -- # set +x 00:28:02.568 ************************************ 00:28:02.568 END TEST dd_flag_directory_forced_aio 00:28:02.568 ************************************ 00:28:02.568 16:43:39 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:28:02.568 16:43:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:02.568 16:43:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:02.568 16:43:39 -- common/autotest_common.sh@10 -- # set +x 00:28:02.568 ************************************ 00:28:02.568 START TEST dd_flag_nofollow_forced_aio 00:28:02.568 ************************************ 00:28:02.568 16:43:39 -- common/autotest_common.sh@1104 -- # nofollow 00:28:02.568 16:43:39 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:02.568 16:43:39 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:02.568 16:43:39 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:02.568 16:43:39 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:02.568 16:43:39 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:02.568 16:43:39 -- common/autotest_common.sh@640 -- # local es=0 00:28:02.568 16:43:39 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:02.568 16:43:39 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.568 16:43:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:02.568 16:43:39 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.568 16:43:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:02.568 16:43:39 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.568 16:43:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:02.568 16:43:39 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:02.568 16:43:39 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:02.568 16:43:39 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:02.568 [2024-07-11 16:43:39.360165] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:02.568 [2024-07-11 16:43:39.360356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138589 ] 00:28:02.826 [2024-07-11 16:43:39.526790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.085 [2024-07-11 16:43:39.685223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.344 [2024-07-11 16:43:39.942351] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:03.344 [2024-07-11 16:43:39.942454] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:28:03.344 [2024-07-11 16:43:39.942494] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:03.910 [2024-07-11 16:43:40.511435] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:04.169 16:43:40 -- common/autotest_common.sh@643 -- # es=216 00:28:04.169 16:43:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:04.169 16:43:40 -- common/autotest_common.sh@652 -- # es=88 00:28:04.169 16:43:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:04.169 16:43:40 -- common/autotest_common.sh@660 -- # es=1 00:28:04.169 16:43:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:04.169 16:43:40 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:04.169 16:43:40 -- common/autotest_common.sh@640 -- # local es=0 00:28:04.169 16:43:40 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:04.169 16:43:40 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:04.169 16:43:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:04.169 16:43:40 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:04.169 16:43:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:04.169 16:43:40 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:04.169 16:43:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:04.169 16:43:40 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:04.169 16:43:40 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:04.169 16:43:40 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:28:04.169 [2024-07-11 16:43:40.898537] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:04.169 [2024-07-11 16:43:40.898727] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138616 ] 00:28:04.426 [2024-07-11 16:43:41.061002] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.684 [2024-07-11 16:43:41.240251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.942 [2024-07-11 16:43:41.494700] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:04.942 [2024-07-11 16:43:41.494771] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:28:04.942 [2024-07-11 16:43:41.494813] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:05.508 [2024-07-11 16:43:42.064794] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:28:05.766 16:43:42 -- common/autotest_common.sh@643 -- # es=216 00:28:05.766 16:43:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:05.766 16:43:42 -- common/autotest_common.sh@652 -- # es=88 00:28:05.766 16:43:42 -- common/autotest_common.sh@653 -- # case "$es" in 00:28:05.766 16:43:42 -- common/autotest_common.sh@660 -- # es=1 00:28:05.767 16:43:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:05.767 16:43:42 -- dd/posix.sh@46 -- # gen_bytes 512 00:28:05.767 16:43:42 -- dd/common.sh@98 -- # xtrace_disable 00:28:05.767 16:43:42 -- common/autotest_common.sh@10 -- # set +x 00:28:05.767 16:43:42 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:05.767 [2024-07-11 16:43:42.454848] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
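The nofollow case is the same pattern applied to symlinks: dd.dump0.link and dd.dump1.link are created with ln -fs, opening either one with --iflag=nofollow or --oflag=nofollow must fail with "Too many levels of symbolic links" (ELOOP, the kernel's O_NOFOLLOW-on-a-symlink error), and the final plain copy through the link succeeds. With GNU dd:

    ln -fs dd.dump0 dd.dump0.link
    dd if=dd.dump0.link iflag=nofollow of=/dev/null 2>&1 \
        | grep -q 'symbolic links' && echo "ok: O_NOFOLLOW refused the symlink"
    dd if=dd.dump0.link of=/dev/null 2>/dev/null   # without the flag, the link is followed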
00:28:05.767 [2024-07-11 16:43:42.455045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138655 ] 00:28:06.025 [2024-07-11 16:43:42.621744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.025 [2024-07-11 16:43:42.783570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.216  Copying: 512/512 [B] (average 500 kBps) 00:28:07.216 00:28:07.216 16:43:43 -- dd/posix.sh@49 -- # [[ o9j7ei0f20vzwn0w99nsi7l6foorgf1slbh3p8qw4ol6a7yp4f5v1x6e7qrch6y08hhipv4xsjltu72axyg4tfl68q97b1cmbhuj1iwqvw09v6rcan51p6dliv4s3gcrwsd5v2jthcv1hd3n8zerexjuoyq66tf74nxbw9ykk8pryqmeg2zersmdg0j82bw2wu0c3foqep9ucq5n4btluhupgsunagy13olo1o5180gnava5snosbj0qpqdfnnhq3tw811doff57uar9rc63i1n6grnkunhjzn5tvoo7e4o5pzyv8t34diag1thlyegma5ry2rqj7s8pab0taf377dqvmy7zmi0243r1vibadjbj99wa60d623936b5588tcg3sz4smair1mvrqdeq63l0f7wyueyilihxiwi6yyyik9fxtsl8bxks5cyzhrybqtgocwpdegay7y9f76u40dt6hsyuwpg92fdkima2rcatvje13y1t27koqrocoel8ih == \o\9\j\7\e\i\0\f\2\0\v\z\w\n\0\w\9\9\n\s\i\7\l\6\f\o\o\r\g\f\1\s\l\b\h\3\p\8\q\w\4\o\l\6\a\7\y\p\4\f\5\v\1\x\6\e\7\q\r\c\h\6\y\0\8\h\h\i\p\v\4\x\s\j\l\t\u\7\2\a\x\y\g\4\t\f\l\6\8\q\9\7\b\1\c\m\b\h\u\j\1\i\w\q\v\w\0\9\v\6\r\c\a\n\5\1\p\6\d\l\i\v\4\s\3\g\c\r\w\s\d\5\v\2\j\t\h\c\v\1\h\d\3\n\8\z\e\r\e\x\j\u\o\y\q\6\6\t\f\7\4\n\x\b\w\9\y\k\k\8\p\r\y\q\m\e\g\2\z\e\r\s\m\d\g\0\j\8\2\b\w\2\w\u\0\c\3\f\o\q\e\p\9\u\c\q\5\n\4\b\t\l\u\h\u\p\g\s\u\n\a\g\y\1\3\o\l\o\1\o\5\1\8\0\g\n\a\v\a\5\s\n\o\s\b\j\0\q\p\q\d\f\n\n\h\q\3\t\w\8\1\1\d\o\f\f\5\7\u\a\r\9\r\c\6\3\i\1\n\6\g\r\n\k\u\n\h\j\z\n\5\t\v\o\o\7\e\4\o\5\p\z\y\v\8\t\3\4\d\i\a\g\1\t\h\l\y\e\g\m\a\5\r\y\2\r\q\j\7\s\8\p\a\b\0\t\a\f\3\7\7\d\q\v\m\y\7\z\m\i\0\2\4\3\r\1\v\i\b\a\d\j\b\j\9\9\w\a\6\0\d\6\2\3\9\3\6\b\5\5\8\8\t\c\g\3\s\z\4\s\m\a\i\r\1\m\v\r\q\d\e\q\6\3\l\0\f\7\w\y\u\e\y\i\l\i\h\x\i\w\i\6\y\y\y\i\k\9\f\x\t\s\l\8\b\x\k\s\5\c\y\z\h\r\y\b\q\t\g\o\c\w\p\d\e\g\a\y\7\y\9\f\7\6\u\4\0\d\t\6\h\s\y\u\w\p\g\9\2\f\d\k\i\m\a\2\r\c\a\t\v\j\e\1\3\y\1\t\2\7\k\o\q\r\o\c\o\e\l\8\i\h ]] 00:28:07.216 00:28:07.216 real 0m4.693s 00:28:07.216 user 0m3.679s 00:28:07.216 sys 0m0.663s 00:28:07.216 16:43:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:07.216 ************************************ 00:28:07.216 END TEST dd_flag_nofollow_forced_aio 00:28:07.216 16:43:43 -- common/autotest_common.sh@10 -- # set +x 00:28:07.216 ************************************ 00:28:07.216 16:43:44 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:28:07.216 16:43:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:07.216 16:43:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:07.216 16:43:44 -- common/autotest_common.sh@10 -- # set +x 00:28:07.473 ************************************ 00:28:07.473 START TEST dd_flag_noatime_forced_aio 00:28:07.473 ************************************ 00:28:07.473 16:43:44 -- common/autotest_common.sh@1104 -- # noatime 00:28:07.473 16:43:44 -- dd/posix.sh@53 -- # local atime_if 00:28:07.473 16:43:44 -- dd/posix.sh@54 -- # local atime_of 00:28:07.473 16:43:44 -- dd/posix.sh@58 -- # gen_bytes 512 00:28:07.473 16:43:44 -- dd/common.sh@98 -- # xtrace_disable 00:28:07.473 16:43:44 -- common/autotest_common.sh@10 -- # set +x 00:28:07.473 16:43:44 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:07.473 16:43:44 -- dd/posix.sh@60 -- # atime_if=1720716223 
00:28:07.473 16:43:44 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:07.473 16:43:44 -- dd/posix.sh@61 -- # atime_of=1720716223 00:28:07.473 16:43:44 -- dd/posix.sh@66 -- # sleep 1 00:28:08.408 16:43:45 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:08.408 [2024-07-11 16:43:45.114333] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:08.408 [2024-07-11 16:43:45.114540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138719 ] 00:28:08.666 [2024-07-11 16:43:45.281836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.666 [2024-07-11 16:43:45.445461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.859  Copying: 512/512 [B] (average 500 kBps) 00:28:09.859 00:28:09.859 16:43:46 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:09.859 16:43:46 -- dd/posix.sh@69 -- # (( atime_if == 1720716223 )) 00:28:09.859 16:43:46 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:09.859 16:43:46 -- dd/posix.sh@70 -- # (( atime_of == 1720716223 )) 00:28:09.859 16:43:46 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:10.118 [2024-07-11 16:43:46.696882] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
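The atime values in these noatime checks are raw epoch seconds from stat --printf=%X, which is why they can be compared with plain shell arithmetic; the later (( atime_if < 1720716227 )) passes because the flag-less second copy moved the file's atime past the value recorded before the sleep. The constants decode back to the wall-clock stamps visible in the surrounding lines:

    date -u -d @1720716223    # 2024-07-11 16:43:43 UTC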
00:28:10.118 [2024-07-11 16:43:46.697081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138739 ] 00:28:10.118 [2024-07-11 16:43:46.862988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.377 [2024-07-11 16:43:47.033597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.571  Copying: 512/512 [B] (average 500 kBps) 00:28:11.571 00:28:11.571 16:43:48 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:11.571 16:43:48 -- dd/posix.sh@73 -- # (( atime_if < 1720716227 )) 00:28:11.571 00:28:11.571 real 0m4.187s 00:28:11.571 user 0m2.449s 00:28:11.571 sys 0m0.469s 00:28:11.571 16:43:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:11.571 16:43:48 -- common/autotest_common.sh@10 -- # set +x 00:28:11.571 ************************************ 00:28:11.571 END TEST dd_flag_noatime_forced_aio 00:28:11.571 ************************************ 00:28:11.571 16:43:48 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:28:11.571 16:43:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:11.571 16:43:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:11.571 16:43:48 -- common/autotest_common.sh@10 -- # set +x 00:28:11.571 ************************************ 00:28:11.571 START TEST dd_flags_misc_forced_aio 00:28:11.571 ************************************ 00:28:11.571 16:43:48 -- common/autotest_common.sh@1104 -- # io 00:28:11.571 16:43:48 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:28:11.571 16:43:48 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:28:11.571 16:43:48 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:28:11.571 16:43:48 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:11.571 16:43:48 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:11.571 16:43:48 -- dd/common.sh@98 -- # xtrace_disable 00:28:11.571 16:43:48 -- common/autotest_common.sh@10 -- # set +x 00:28:11.571 16:43:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:11.571 16:43:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:11.571 [2024-07-11 16:43:48.342158] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
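Every "START TEST"/"END TEST" banner pair in this log, together with the real/user/sys lines between them, comes from autotest_common.sh's run_test wrapper, which the posix suite calls as, e.g., run_test dd_flags_misc_forced_aio io. A simplified sketch of what such a wrapper does (the real one also validates its arguments and toggles xtrace):

    run_test() {
        local name=$1; shift
        printf '%s\n' "************ START TEST $name ************"
        time "$@"                # the test function, e.g. io or noatime
        printf '%s\n' "************ END TEST $name ************"
    }
    run_test dd_flags_misc_forced_aio io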
00:28:11.571 [2024-07-11 16:43:48.342357] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138782 ] 00:28:11.830 [2024-07-11 16:43:48.509932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.088 [2024-07-11 16:43:48.677979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.282  Copying: 512/512 [B] (average 250 kBps) 00:28:13.282 00:28:13.283 16:43:49 -- dd/posix.sh@93 -- # [[ 88cws1sqn0jpdcvj4id2pbdqxmftheyru3yuh1lnwwf50jfh8h88n45ii0zyu7vq3yuedeb0pld2m0sw49ecvfjo924xth066ki34q5cu2152jz8p971za13c4rlzauujhq8ksycvx0vjnb7gt5h47pto9iz9cg9lxmancpcz52lhg69uu8va5xjfznyk88li81bvq7gqoinlh8gu32koc9n9whoaa4gqwrzb7lvslp2aovgj1i0j9i6irymmcl0z13di7h2cpt10mxr621cla5bn9lx6o7el4ocggrzvypqtfv1oh83p9i3fp7z44x98lo8sz8zomc35w80ztwe4tfsxz6y1xiakmn2xczbqjsckhf0uerzxx1hby25olrx1f4v2w9bf749ljxqqb00e67jcm2qwz7ktek5pxuz62uca5zbjjoy6rczeas1fr97pg4ngkjomt3d8wkmr6pllx7j2rzu628eoohvqurga5fj8i3uukf9akm895ssiy5p == \8\8\c\w\s\1\s\q\n\0\j\p\d\c\v\j\4\i\d\2\p\b\d\q\x\m\f\t\h\e\y\r\u\3\y\u\h\1\l\n\w\w\f\5\0\j\f\h\8\h\8\8\n\4\5\i\i\0\z\y\u\7\v\q\3\y\u\e\d\e\b\0\p\l\d\2\m\0\s\w\4\9\e\c\v\f\j\o\9\2\4\x\t\h\0\6\6\k\i\3\4\q\5\c\u\2\1\5\2\j\z\8\p\9\7\1\z\a\1\3\c\4\r\l\z\a\u\u\j\h\q\8\k\s\y\c\v\x\0\v\j\n\b\7\g\t\5\h\4\7\p\t\o\9\i\z\9\c\g\9\l\x\m\a\n\c\p\c\z\5\2\l\h\g\6\9\u\u\8\v\a\5\x\j\f\z\n\y\k\8\8\l\i\8\1\b\v\q\7\g\q\o\i\n\l\h\8\g\u\3\2\k\o\c\9\n\9\w\h\o\a\a\4\g\q\w\r\z\b\7\l\v\s\l\p\2\a\o\v\g\j\1\i\0\j\9\i\6\i\r\y\m\m\c\l\0\z\1\3\d\i\7\h\2\c\p\t\1\0\m\x\r\6\2\1\c\l\a\5\b\n\9\l\x\6\o\7\e\l\4\o\c\g\g\r\z\v\y\p\q\t\f\v\1\o\h\8\3\p\9\i\3\f\p\7\z\4\4\x\9\8\l\o\8\s\z\8\z\o\m\c\3\5\w\8\0\z\t\w\e\4\t\f\s\x\z\6\y\1\x\i\a\k\m\n\2\x\c\z\b\q\j\s\c\k\h\f\0\u\e\r\z\x\x\1\h\b\y\2\5\o\l\r\x\1\f\4\v\2\w\9\b\f\7\4\9\l\j\x\q\q\b\0\0\e\6\7\j\c\m\2\q\w\z\7\k\t\e\k\5\p\x\u\z\6\2\u\c\a\5\z\b\j\j\o\y\6\r\c\z\e\a\s\1\f\r\9\7\p\g\4\n\g\k\j\o\m\t\3\d\8\w\k\m\r\6\p\l\l\x\7\j\2\r\z\u\6\2\8\e\o\o\h\v\q\u\r\g\a\5\f\j\8\i\3\u\u\k\f\9\a\k\m\8\9\5\s\s\i\y\5\p ]] 00:28:13.283 16:43:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:13.283 16:43:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:13.283 [2024-07-11 16:43:49.918842] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:13.283 [2024-07-11 16:43:49.919013] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138806 ] 00:28:13.283 [2024-07-11 16:43:50.078574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.541 [2024-07-11 16:43:50.232497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.736  Copying: 512/512 [B] (average 500 kBps) 00:28:14.736 00:28:14.736 16:43:51 -- dd/posix.sh@93 -- # [[ 88cws1sqn0jpdcvj4id2pbdqxmftheyru3yuh1lnwwf50jfh8h88n45ii0zyu7vq3yuedeb0pld2m0sw49ecvfjo924xth066ki34q5cu2152jz8p971za13c4rlzauujhq8ksycvx0vjnb7gt5h47pto9iz9cg9lxmancpcz52lhg69uu8va5xjfznyk88li81bvq7gqoinlh8gu32koc9n9whoaa4gqwrzb7lvslp2aovgj1i0j9i6irymmcl0z13di7h2cpt10mxr621cla5bn9lx6o7el4ocggrzvypqtfv1oh83p9i3fp7z44x98lo8sz8zomc35w80ztwe4tfsxz6y1xiakmn2xczbqjsckhf0uerzxx1hby25olrx1f4v2w9bf749ljxqqb00e67jcm2qwz7ktek5pxuz62uca5zbjjoy6rczeas1fr97pg4ngkjomt3d8wkmr6pllx7j2rzu628eoohvqurga5fj8i3uukf9akm895ssiy5p == \8\8\c\w\s\1\s\q\n\0\j\p\d\c\v\j\4\i\d\2\p\b\d\q\x\m\f\t\h\e\y\r\u\3\y\u\h\1\l\n\w\w\f\5\0\j\f\h\8\h\8\8\n\4\5\i\i\0\z\y\u\7\v\q\3\y\u\e\d\e\b\0\p\l\d\2\m\0\s\w\4\9\e\c\v\f\j\o\9\2\4\x\t\h\0\6\6\k\i\3\4\q\5\c\u\2\1\5\2\j\z\8\p\9\7\1\z\a\1\3\c\4\r\l\z\a\u\u\j\h\q\8\k\s\y\c\v\x\0\v\j\n\b\7\g\t\5\h\4\7\p\t\o\9\i\z\9\c\g\9\l\x\m\a\n\c\p\c\z\5\2\l\h\g\6\9\u\u\8\v\a\5\x\j\f\z\n\y\k\8\8\l\i\8\1\b\v\q\7\g\q\o\i\n\l\h\8\g\u\3\2\k\o\c\9\n\9\w\h\o\a\a\4\g\q\w\r\z\b\7\l\v\s\l\p\2\a\o\v\g\j\1\i\0\j\9\i\6\i\r\y\m\m\c\l\0\z\1\3\d\i\7\h\2\c\p\t\1\0\m\x\r\6\2\1\c\l\a\5\b\n\9\l\x\6\o\7\e\l\4\o\c\g\g\r\z\v\y\p\q\t\f\v\1\o\h\8\3\p\9\i\3\f\p\7\z\4\4\x\9\8\l\o\8\s\z\8\z\o\m\c\3\5\w\8\0\z\t\w\e\4\t\f\s\x\z\6\y\1\x\i\a\k\m\n\2\x\c\z\b\q\j\s\c\k\h\f\0\u\e\r\z\x\x\1\h\b\y\2\5\o\l\r\x\1\f\4\v\2\w\9\b\f\7\4\9\l\j\x\q\q\b\0\0\e\6\7\j\c\m\2\q\w\z\7\k\t\e\k\5\p\x\u\z\6\2\u\c\a\5\z\b\j\j\o\y\6\r\c\z\e\a\s\1\f\r\9\7\p\g\4\n\g\k\j\o\m\t\3\d\8\w\k\m\r\6\p\l\l\x\7\j\2\r\z\u\6\2\8\e\o\o\h\v\q\u\r\g\a\5\f\j\8\i\3\u\u\k\f\9\a\k\m\8\9\5\s\s\i\y\5\p ]] 00:28:14.736 16:43:51 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:14.736 16:43:51 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:14.736 [2024-07-11 16:43:51.497690] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:14.736 [2024-07-11 16:43:51.497882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138832 ] 00:28:14.995 [2024-07-11 16:43:51.666892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.253 [2024-07-11 16:43:51.834870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.458  Copying: 512/512 [B] (average 166 kBps) 00:28:16.458 00:28:16.458 16:43:53 -- dd/posix.sh@93 -- # [[ 88cws1sqn0jpdcvj4id2pbdqxmftheyru3yuh1lnwwf50jfh8h88n45ii0zyu7vq3yuedeb0pld2m0sw49ecvfjo924xth066ki34q5cu2152jz8p971za13c4rlzauujhq8ksycvx0vjnb7gt5h47pto9iz9cg9lxmancpcz52lhg69uu8va5xjfznyk88li81bvq7gqoinlh8gu32koc9n9whoaa4gqwrzb7lvslp2aovgj1i0j9i6irymmcl0z13di7h2cpt10mxr621cla5bn9lx6o7el4ocggrzvypqtfv1oh83p9i3fp7z44x98lo8sz8zomc35w80ztwe4tfsxz6y1xiakmn2xczbqjsckhf0uerzxx1hby25olrx1f4v2w9bf749ljxqqb00e67jcm2qwz7ktek5pxuz62uca5zbjjoy6rczeas1fr97pg4ngkjomt3d8wkmr6pllx7j2rzu628eoohvqurga5fj8i3uukf9akm895ssiy5p == \8\8\c\w\s\1\s\q\n\0\j\p\d\c\v\j\4\i\d\2\p\b\d\q\x\m\f\t\h\e\y\r\u\3\y\u\h\1\l\n\w\w\f\5\0\j\f\h\8\h\8\8\n\4\5\i\i\0\z\y\u\7\v\q\3\y\u\e\d\e\b\0\p\l\d\2\m\0\s\w\4\9\e\c\v\f\j\o\9\2\4\x\t\h\0\6\6\k\i\3\4\q\5\c\u\2\1\5\2\j\z\8\p\9\7\1\z\a\1\3\c\4\r\l\z\a\u\u\j\h\q\8\k\s\y\c\v\x\0\v\j\n\b\7\g\t\5\h\4\7\p\t\o\9\i\z\9\c\g\9\l\x\m\a\n\c\p\c\z\5\2\l\h\g\6\9\u\u\8\v\a\5\x\j\f\z\n\y\k\8\8\l\i\8\1\b\v\q\7\g\q\o\i\n\l\h\8\g\u\3\2\k\o\c\9\n\9\w\h\o\a\a\4\g\q\w\r\z\b\7\l\v\s\l\p\2\a\o\v\g\j\1\i\0\j\9\i\6\i\r\y\m\m\c\l\0\z\1\3\d\i\7\h\2\c\p\t\1\0\m\x\r\6\2\1\c\l\a\5\b\n\9\l\x\6\o\7\e\l\4\o\c\g\g\r\z\v\y\p\q\t\f\v\1\o\h\8\3\p\9\i\3\f\p\7\z\4\4\x\9\8\l\o\8\s\z\8\z\o\m\c\3\5\w\8\0\z\t\w\e\4\t\f\s\x\z\6\y\1\x\i\a\k\m\n\2\x\c\z\b\q\j\s\c\k\h\f\0\u\e\r\z\x\x\1\h\b\y\2\5\o\l\r\x\1\f\4\v\2\w\9\b\f\7\4\9\l\j\x\q\q\b\0\0\e\6\7\j\c\m\2\q\w\z\7\k\t\e\k\5\p\x\u\z\6\2\u\c\a\5\z\b\j\j\o\y\6\r\c\z\e\a\s\1\f\r\9\7\p\g\4\n\g\k\j\o\m\t\3\d\8\w\k\m\r\6\p\l\l\x\7\j\2\r\z\u\6\2\8\e\o\o\h\v\q\u\r\g\a\5\f\j\8\i\3\u\u\k\f\9\a\k\m\8\9\5\s\s\i\y\5\p ]] 00:28:16.458 16:43:53 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:16.458 16:43:53 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:16.458 [2024-07-11 16:43:53.073783] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:16.458 [2024-07-11 16:43:53.073930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138868 ] 00:28:16.458 [2024-07-11 16:43:53.223985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.715 [2024-07-11 16:43:53.382779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.906  Copying: 512/512 [B] (average 250 kBps) 00:28:17.906 00:28:17.906 16:43:54 -- dd/posix.sh@93 -- # [[ 88cws1sqn0jpdcvj4id2pbdqxmftheyru3yuh1lnwwf50jfh8h88n45ii0zyu7vq3yuedeb0pld2m0sw49ecvfjo924xth066ki34q5cu2152jz8p971za13c4rlzauujhq8ksycvx0vjnb7gt5h47pto9iz9cg9lxmancpcz52lhg69uu8va5xjfznyk88li81bvq7gqoinlh8gu32koc9n9whoaa4gqwrzb7lvslp2aovgj1i0j9i6irymmcl0z13di7h2cpt10mxr621cla5bn9lx6o7el4ocggrzvypqtfv1oh83p9i3fp7z44x98lo8sz8zomc35w80ztwe4tfsxz6y1xiakmn2xczbqjsckhf0uerzxx1hby25olrx1f4v2w9bf749ljxqqb00e67jcm2qwz7ktek5pxuz62uca5zbjjoy6rczeas1fr97pg4ngkjomt3d8wkmr6pllx7j2rzu628eoohvqurga5fj8i3uukf9akm895ssiy5p == \8\8\c\w\s\1\s\q\n\0\j\p\d\c\v\j\4\i\d\2\p\b\d\q\x\m\f\t\h\e\y\r\u\3\y\u\h\1\l\n\w\w\f\5\0\j\f\h\8\h\8\8\n\4\5\i\i\0\z\y\u\7\v\q\3\y\u\e\d\e\b\0\p\l\d\2\m\0\s\w\4\9\e\c\v\f\j\o\9\2\4\x\t\h\0\6\6\k\i\3\4\q\5\c\u\2\1\5\2\j\z\8\p\9\7\1\z\a\1\3\c\4\r\l\z\a\u\u\j\h\q\8\k\s\y\c\v\x\0\v\j\n\b\7\g\t\5\h\4\7\p\t\o\9\i\z\9\c\g\9\l\x\m\a\n\c\p\c\z\5\2\l\h\g\6\9\u\u\8\v\a\5\x\j\f\z\n\y\k\8\8\l\i\8\1\b\v\q\7\g\q\o\i\n\l\h\8\g\u\3\2\k\o\c\9\n\9\w\h\o\a\a\4\g\q\w\r\z\b\7\l\v\s\l\p\2\a\o\v\g\j\1\i\0\j\9\i\6\i\r\y\m\m\c\l\0\z\1\3\d\i\7\h\2\c\p\t\1\0\m\x\r\6\2\1\c\l\a\5\b\n\9\l\x\6\o\7\e\l\4\o\c\g\g\r\z\v\y\p\q\t\f\v\1\o\h\8\3\p\9\i\3\f\p\7\z\4\4\x\9\8\l\o\8\s\z\8\z\o\m\c\3\5\w\8\0\z\t\w\e\4\t\f\s\x\z\6\y\1\x\i\a\k\m\n\2\x\c\z\b\q\j\s\c\k\h\f\0\u\e\r\z\x\x\1\h\b\y\2\5\o\l\r\x\1\f\4\v\2\w\9\b\f\7\4\9\l\j\x\q\q\b\0\0\e\6\7\j\c\m\2\q\w\z\7\k\t\e\k\5\p\x\u\z\6\2\u\c\a\5\z\b\j\j\o\y\6\r\c\z\e\a\s\1\f\r\9\7\p\g\4\n\g\k\j\o\m\t\3\d\8\w\k\m\r\6\p\l\l\x\7\j\2\r\z\u\6\2\8\e\o\o\h\v\q\u\r\g\a\5\f\j\8\i\3\u\u\k\f\9\a\k\m\8\9\5\s\s\i\y\5\p ]] 00:28:17.906 16:43:54 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:17.906 16:43:54 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:17.906 16:43:54 -- dd/common.sh@98 -- # xtrace_disable 00:28:17.906 16:43:54 -- common/autotest_common.sh@10 -- # set +x 00:28:17.906 16:43:54 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:17.906 16:43:54 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:17.906 [2024-07-11 16:43:54.652400] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:17.906 [2024-07-11 16:43:54.652609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138892 ] 00:28:18.164 [2024-07-11 16:43:54.818334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.422 [2024-07-11 16:43:54.995705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.613  Copying: 512/512 [B] (average 500 kBps) 00:28:19.614 00:28:19.614 16:43:56 -- dd/posix.sh@93 -- # [[ uwclil9jlt0k7vcherbyp46g86mvn0nqu3emvqnnpl4bu1jknca13gx8te2v3dstufn6wtc4ppjk3fya9eopq6xkb5vj7gznsccllczywaj16tpmoj0rfyabv08py3dyu7xrlc9zjuuagtcwjqnr0895exyiaki47anqwgfxbz32cjdj0mn815u7ji4f0ldya38xpu1q9chgu85c6fbbbqvu3zyyyqnt8aj45uf5x3jh6b0i14m0gxtjh9krjmy0fo0jnwa8qarh292js93kz79xyprq7ut8nzcimkp7z632etxxanh5mhmwlyolaa15zqagd96irc9k7a6jufrsafjnbnasp3g9fcadiwv8e0g33vxxnkwz9x14m1h85hiqaz0zzctxv6k8zdkuyqfrprsbhhe2bs6bx428i00547hl1327rnwdqyvv7cerwpkwgpvor3041xvss357a7czmic5gz9j59lgd6ekhu3zzrn9tl30qtz35skvqlnemna6 == \u\w\c\l\i\l\9\j\l\t\0\k\7\v\c\h\e\r\b\y\p\4\6\g\8\6\m\v\n\0\n\q\u\3\e\m\v\q\n\n\p\l\4\b\u\1\j\k\n\c\a\1\3\g\x\8\t\e\2\v\3\d\s\t\u\f\n\6\w\t\c\4\p\p\j\k\3\f\y\a\9\e\o\p\q\6\x\k\b\5\v\j\7\g\z\n\s\c\c\l\l\c\z\y\w\a\j\1\6\t\p\m\o\j\0\r\f\y\a\b\v\0\8\p\y\3\d\y\u\7\x\r\l\c\9\z\j\u\u\a\g\t\c\w\j\q\n\r\0\8\9\5\e\x\y\i\a\k\i\4\7\a\n\q\w\g\f\x\b\z\3\2\c\j\d\j\0\m\n\8\1\5\u\7\j\i\4\f\0\l\d\y\a\3\8\x\p\u\1\q\9\c\h\g\u\8\5\c\6\f\b\b\b\q\v\u\3\z\y\y\y\q\n\t\8\a\j\4\5\u\f\5\x\3\j\h\6\b\0\i\1\4\m\0\g\x\t\j\h\9\k\r\j\m\y\0\f\o\0\j\n\w\a\8\q\a\r\h\2\9\2\j\s\9\3\k\z\7\9\x\y\p\r\q\7\u\t\8\n\z\c\i\m\k\p\7\z\6\3\2\e\t\x\x\a\n\h\5\m\h\m\w\l\y\o\l\a\a\1\5\z\q\a\g\d\9\6\i\r\c\9\k\7\a\6\j\u\f\r\s\a\f\j\n\b\n\a\s\p\3\g\9\f\c\a\d\i\w\v\8\e\0\g\3\3\v\x\x\n\k\w\z\9\x\1\4\m\1\h\8\5\h\i\q\a\z\0\z\z\c\t\x\v\6\k\8\z\d\k\u\y\q\f\r\p\r\s\b\h\h\e\2\b\s\6\b\x\4\2\8\i\0\0\5\4\7\h\l\1\3\2\7\r\n\w\d\q\y\v\v\7\c\e\r\w\p\k\w\g\p\v\o\r\3\0\4\1\x\v\s\s\3\5\7\a\7\c\z\m\i\c\5\g\z\9\j\5\9\l\g\d\6\e\k\h\u\3\z\z\r\n\9\t\l\3\0\q\t\z\3\5\s\k\v\q\l\n\e\m\n\a\6 ]] 00:28:19.614 16:43:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:19.614 16:43:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:19.614 [2024-07-11 16:43:56.248334] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:19.614 [2024-07-11 16:43:56.248541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138920 ] 00:28:19.614 [2024-07-11 16:43:56.417615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.872 [2024-07-11 16:43:56.570849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.092  Copying: 512/512 [B] (average 500 kBps) 00:28:21.092 00:28:21.093 16:43:57 -- dd/posix.sh@93 -- # [[ uwclil9jlt0k7vcherbyp46g86mvn0nqu3emvqnnpl4bu1jknca13gx8te2v3dstufn6wtc4ppjk3fya9eopq6xkb5vj7gznsccllczywaj16tpmoj0rfyabv08py3dyu7xrlc9zjuuagtcwjqnr0895exyiaki47anqwgfxbz32cjdj0mn815u7ji4f0ldya38xpu1q9chgu85c6fbbbqvu3zyyyqnt8aj45uf5x3jh6b0i14m0gxtjh9krjmy0fo0jnwa8qarh292js93kz79xyprq7ut8nzcimkp7z632etxxanh5mhmwlyolaa15zqagd96irc9k7a6jufrsafjnbnasp3g9fcadiwv8e0g33vxxnkwz9x14m1h85hiqaz0zzctxv6k8zdkuyqfrprsbhhe2bs6bx428i00547hl1327rnwdqyvv7cerwpkwgpvor3041xvss357a7czmic5gz9j59lgd6ekhu3zzrn9tl30qtz35skvqlnemna6 == \u\w\c\l\i\l\9\j\l\t\0\k\7\v\c\h\e\r\b\y\p\4\6\g\8\6\m\v\n\0\n\q\u\3\e\m\v\q\n\n\p\l\4\b\u\1\j\k\n\c\a\1\3\g\x\8\t\e\2\v\3\d\s\t\u\f\n\6\w\t\c\4\p\p\j\k\3\f\y\a\9\e\o\p\q\6\x\k\b\5\v\j\7\g\z\n\s\c\c\l\l\c\z\y\w\a\j\1\6\t\p\m\o\j\0\r\f\y\a\b\v\0\8\p\y\3\d\y\u\7\x\r\l\c\9\z\j\u\u\a\g\t\c\w\j\q\n\r\0\8\9\5\e\x\y\i\a\k\i\4\7\a\n\q\w\g\f\x\b\z\3\2\c\j\d\j\0\m\n\8\1\5\u\7\j\i\4\f\0\l\d\y\a\3\8\x\p\u\1\q\9\c\h\g\u\8\5\c\6\f\b\b\b\q\v\u\3\z\y\y\y\q\n\t\8\a\j\4\5\u\f\5\x\3\j\h\6\b\0\i\1\4\m\0\g\x\t\j\h\9\k\r\j\m\y\0\f\o\0\j\n\w\a\8\q\a\r\h\2\9\2\j\s\9\3\k\z\7\9\x\y\p\r\q\7\u\t\8\n\z\c\i\m\k\p\7\z\6\3\2\e\t\x\x\a\n\h\5\m\h\m\w\l\y\o\l\a\a\1\5\z\q\a\g\d\9\6\i\r\c\9\k\7\a\6\j\u\f\r\s\a\f\j\n\b\n\a\s\p\3\g\9\f\c\a\d\i\w\v\8\e\0\g\3\3\v\x\x\n\k\w\z\9\x\1\4\m\1\h\8\5\h\i\q\a\z\0\z\z\c\t\x\v\6\k\8\z\d\k\u\y\q\f\r\p\r\s\b\h\h\e\2\b\s\6\b\x\4\2\8\i\0\0\5\4\7\h\l\1\3\2\7\r\n\w\d\q\y\v\v\7\c\e\r\w\p\k\w\g\p\v\o\r\3\0\4\1\x\v\s\s\3\5\7\a\7\c\z\m\i\c\5\g\z\9\j\5\9\l\g\d\6\e\k\h\u\3\z\z\r\n\9\t\l\3\0\q\t\z\3\5\s\k\v\q\l\n\e\m\n\a\6 ]] 00:28:21.093 16:43:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:21.093 16:43:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:21.093 [2024-07-11 16:43:57.825731] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:21.093 [2024-07-11 16:43:57.825929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138938 ] 00:28:21.351 [2024-07-11 16:43:57.991164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.351 [2024-07-11 16:43:58.152373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.543  Copying: 512/512 [B] (average 250 kBps) 00:28:22.543 00:28:22.543 16:43:59 -- dd/posix.sh@93 -- # [[ uwclil9jlt0k7vcherbyp46g86mvn0nqu3emvqnnpl4bu1jknca13gx8te2v3dstufn6wtc4ppjk3fya9eopq6xkb5vj7gznsccllczywaj16tpmoj0rfyabv08py3dyu7xrlc9zjuuagtcwjqnr0895exyiaki47anqwgfxbz32cjdj0mn815u7ji4f0ldya38xpu1q9chgu85c6fbbbqvu3zyyyqnt8aj45uf5x3jh6b0i14m0gxtjh9krjmy0fo0jnwa8qarh292js93kz79xyprq7ut8nzcimkp7z632etxxanh5mhmwlyolaa15zqagd96irc9k7a6jufrsafjnbnasp3g9fcadiwv8e0g33vxxnkwz9x14m1h85hiqaz0zzctxv6k8zdkuyqfrprsbhhe2bs6bx428i00547hl1327rnwdqyvv7cerwpkwgpvor3041xvss357a7czmic5gz9j59lgd6ekhu3zzrn9tl30qtz35skvqlnemna6 == \u\w\c\l\i\l\9\j\l\t\0\k\7\v\c\h\e\r\b\y\p\4\6\g\8\6\m\v\n\0\n\q\u\3\e\m\v\q\n\n\p\l\4\b\u\1\j\k\n\c\a\1\3\g\x\8\t\e\2\v\3\d\s\t\u\f\n\6\w\t\c\4\p\p\j\k\3\f\y\a\9\e\o\p\q\6\x\k\b\5\v\j\7\g\z\n\s\c\c\l\l\c\z\y\w\a\j\1\6\t\p\m\o\j\0\r\f\y\a\b\v\0\8\p\y\3\d\y\u\7\x\r\l\c\9\z\j\u\u\a\g\t\c\w\j\q\n\r\0\8\9\5\e\x\y\i\a\k\i\4\7\a\n\q\w\g\f\x\b\z\3\2\c\j\d\j\0\m\n\8\1\5\u\7\j\i\4\f\0\l\d\y\a\3\8\x\p\u\1\q\9\c\h\g\u\8\5\c\6\f\b\b\b\q\v\u\3\z\y\y\y\q\n\t\8\a\j\4\5\u\f\5\x\3\j\h\6\b\0\i\1\4\m\0\g\x\t\j\h\9\k\r\j\m\y\0\f\o\0\j\n\w\a\8\q\a\r\h\2\9\2\j\s\9\3\k\z\7\9\x\y\p\r\q\7\u\t\8\n\z\c\i\m\k\p\7\z\6\3\2\e\t\x\x\a\n\h\5\m\h\m\w\l\y\o\l\a\a\1\5\z\q\a\g\d\9\6\i\r\c\9\k\7\a\6\j\u\f\r\s\a\f\j\n\b\n\a\s\p\3\g\9\f\c\a\d\i\w\v\8\e\0\g\3\3\v\x\x\n\k\w\z\9\x\1\4\m\1\h\8\5\h\i\q\a\z\0\z\z\c\t\x\v\6\k\8\z\d\k\u\y\q\f\r\p\r\s\b\h\h\e\2\b\s\6\b\x\4\2\8\i\0\0\5\4\7\h\l\1\3\2\7\r\n\w\d\q\y\v\v\7\c\e\r\w\p\k\w\g\p\v\o\r\3\0\4\1\x\v\s\s\3\5\7\a\7\c\z\m\i\c\5\g\z\9\j\5\9\l\g\d\6\e\k\h\u\3\z\z\r\n\9\t\l\3\0\q\t\z\3\5\s\k\v\q\l\n\e\m\n\a\6 ]] 00:28:22.543 16:43:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:22.543 16:43:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:22.802 [2024-07-11 16:43:59.404966] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
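The runs above step dd/posix.sh through its forced-AIO flag matrix: a fresh 512-byte dump file is generated per input flag (gen_bytes 512), copied through spdk_dd with each --iflag/--oflag pairing visible in the trace (direct and nonblock on the read side; direct, nonblock, sync, and dsync on the write side), and the copy is verified by encoding both files and string-comparing the results. A minimal sketch of that loop — the flag lists and the encode-and-compare step are inferred from the trace, not taken from the script itself:

    flags_ro=(direct nonblock)
    flags_rw=(direct nonblock sync dsync)
    dd0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    dd1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    for flag_ro in "${flags_ro[@]}"; do
        gen_bytes 512                      # regenerate 512 random bytes in dd.dump0
        for flag_rw in "${flags_rw[@]}"; do
            spdk_dd --aio --if=$dd0 --iflag=$flag_ro --of=$dd1 --oflag=$flag_rw
            # the suite compares encoded copies of both files, roughly:
            [[ $(base64 -w0 $dd0) == $(base64 -w0 $dd1) ]]
        done
    done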
00:28:22.802 [2024-07-11 16:43:59.405138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138962 ] 00:28:22.802 [2024-07-11 16:43:59.567884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.060 [2024-07-11 16:43:59.720446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.254  Copying: 512/512 [B] (average 166 kBps) 00:28:24.254 00:28:24.254 16:44:00 -- dd/posix.sh@93 -- # [[ uwclil9jlt0k7vcherbyp46g86mvn0nqu3emvqnnpl4bu1jknca13gx8te2v3dstufn6wtc4ppjk3fya9eopq6xkb5vj7gznsccllczywaj16tpmoj0rfyabv08py3dyu7xrlc9zjuuagtcwjqnr0895exyiaki47anqwgfxbz32cjdj0mn815u7ji4f0ldya38xpu1q9chgu85c6fbbbqvu3zyyyqnt8aj45uf5x3jh6b0i14m0gxtjh9krjmy0fo0jnwa8qarh292js93kz79xyprq7ut8nzcimkp7z632etxxanh5mhmwlyolaa15zqagd96irc9k7a6jufrsafjnbnasp3g9fcadiwv8e0g33vxxnkwz9x14m1h85hiqaz0zzctxv6k8zdkuyqfrprsbhhe2bs6bx428i00547hl1327rnwdqyvv7cerwpkwgpvor3041xvss357a7czmic5gz9j59lgd6ekhu3zzrn9tl30qtz35skvqlnemna6 == \u\w\c\l\i\l\9\j\l\t\0\k\7\v\c\h\e\r\b\y\p\4\6\g\8\6\m\v\n\0\n\q\u\3\e\m\v\q\n\n\p\l\4\b\u\1\j\k\n\c\a\1\3\g\x\8\t\e\2\v\3\d\s\t\u\f\n\6\w\t\c\4\p\p\j\k\3\f\y\a\9\e\o\p\q\6\x\k\b\5\v\j\7\g\z\n\s\c\c\l\l\c\z\y\w\a\j\1\6\t\p\m\o\j\0\r\f\y\a\b\v\0\8\p\y\3\d\y\u\7\x\r\l\c\9\z\j\u\u\a\g\t\c\w\j\q\n\r\0\8\9\5\e\x\y\i\a\k\i\4\7\a\n\q\w\g\f\x\b\z\3\2\c\j\d\j\0\m\n\8\1\5\u\7\j\i\4\f\0\l\d\y\a\3\8\x\p\u\1\q\9\c\h\g\u\8\5\c\6\f\b\b\b\q\v\u\3\z\y\y\y\q\n\t\8\a\j\4\5\u\f\5\x\3\j\h\6\b\0\i\1\4\m\0\g\x\t\j\h\9\k\r\j\m\y\0\f\o\0\j\n\w\a\8\q\a\r\h\2\9\2\j\s\9\3\k\z\7\9\x\y\p\r\q\7\u\t\8\n\z\c\i\m\k\p\7\z\6\3\2\e\t\x\x\a\n\h\5\m\h\m\w\l\y\o\l\a\a\1\5\z\q\a\g\d\9\6\i\r\c\9\k\7\a\6\j\u\f\r\s\a\f\j\n\b\n\a\s\p\3\g\9\f\c\a\d\i\w\v\8\e\0\g\3\3\v\x\x\n\k\w\z\9\x\1\4\m\1\h\8\5\h\i\q\a\z\0\z\z\c\t\x\v\6\k\8\z\d\k\u\y\q\f\r\p\r\s\b\h\h\e\2\b\s\6\b\x\4\2\8\i\0\0\5\4\7\h\l\1\3\2\7\r\n\w\d\q\y\v\v\7\c\e\r\w\p\k\w\g\p\v\o\r\3\0\4\1\x\v\s\s\3\5\7\a\7\c\z\m\i\c\5\g\z\9\j\5\9\l\g\d\6\e\k\h\u\3\z\z\r\n\9\t\l\3\0\q\t\z\3\5\s\k\v\q\l\n\e\m\n\a\6 ]] 00:28:24.254 00:28:24.254 real 0m12.648s 00:28:24.254 user 0m9.826s 00:28:24.254 sys 0m1.716s 00:28:24.254 16:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.254 16:44:00 -- common/autotest_common.sh@10 -- # set +x 00:28:24.254 ************************************ 00:28:24.254 END TEST dd_flags_misc_forced_aio 00:28:24.254 ************************************ 00:28:24.254 16:44:00 -- dd/posix.sh@1 -- # cleanup 00:28:24.254 16:44:00 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:24.254 16:44:00 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:24.254 00:28:24.254 real 0m52.795s 00:28:24.254 user 0m39.642s 00:28:24.254 sys 0m6.980s 00:28:24.254 16:44:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.254 ************************************ 00:28:24.254 END TEST spdk_dd_posix 00:28:24.254 ************************************ 00:28:24.254 16:44:00 -- common/autotest_common.sh@10 -- # set +x 00:28:24.254 16:44:00 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:28:24.254 16:44:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:24.254 16:44:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:24.254 16:44:00 -- 
common/autotest_common.sh@10 -- # set +x 00:28:24.254 ************************************ 00:28:24.254 START TEST spdk_dd_malloc 00:28:24.254 ************************************ 00:28:24.254 16:44:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:28:24.513 * Looking for test storage... 00:28:24.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:24.513 16:44:01 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:24.513 16:44:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.513 16:44:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.513 16:44:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.513 16:44:01 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:24.513 16:44:01 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:24.513 16:44:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:24.513 16:44:01 -- paths/export.sh@5 -- # export PATH 00:28:24.513 16:44:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:24.513 16:44:01 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:28:24.513 16:44:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:24.513 16:44:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:24.513 16:44:01 -- common/autotest_common.sh@10 -- # set +x 00:28:24.513 ************************************ 00:28:24.513 START TEST dd_malloc_copy 00:28:24.513 ************************************ 00:28:24.513 16:44:01 -- 
common/autotest_common.sh@1104 -- # malloc_copy 00:28:24.513 16:44:01 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:28:24.513 16:44:01 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:28:24.513 16:44:01 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:28:24.513 16:44:01 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:28:24.513 16:44:01 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:28:24.513 16:44:01 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:28:24.513 16:44:01 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:28:24.513 16:44:01 -- dd/malloc.sh@28 -- # gen_conf 00:28:24.513 16:44:01 -- dd/common.sh@31 -- # xtrace_disable 00:28:24.513 16:44:01 -- common/autotest_common.sh@10 -- # set +x 00:28:24.513 [2024-07-11 16:44:01.167096] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:24.513 [2024-07-11 16:44:01.167282] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139044 ] 00:28:24.513 { 00:28:24.513 "subsystems": [ 00:28:24.513 { 00:28:24.513 "subsystem": "bdev", 00:28:24.513 "config": [ 00:28:24.513 { 00:28:24.513 "params": { 00:28:24.513 "num_blocks": 1048576, 00:28:24.513 "block_size": 512, 00:28:24.513 "name": "malloc0" 00:28:24.513 }, 00:28:24.513 "method": "bdev_malloc_create" 00:28:24.513 }, 00:28:24.513 { 00:28:24.513 "params": { 00:28:24.513 "num_blocks": 1048576, 00:28:24.513 "block_size": 512, 00:28:24.513 "name": "malloc1" 00:28:24.513 }, 00:28:24.513 "method": "bdev_malloc_create" 00:28:24.513 }, 00:28:24.513 { 00:28:24.513 "method": "bdev_wait_for_examine" 00:28:24.513 } 00:28:24.513 ] 00:28:24.513 } 00:28:24.513 ] 00:28:24.513 } 00:28:24.771 [2024-07-11 16:44:01.334225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.771 [2024-07-11 16:44:01.504212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.416  Copying: 227/512 [MB] (227 MBps) Copying: 452/512 [MB] (225 MBps) Copying: 512/512 [MB] (average 226 MBps) 00:28:31.416 00:28:31.416 16:44:07 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:28:31.416 16:44:07 -- dd/malloc.sh@33 -- # gen_conf 00:28:31.416 16:44:07 -- dd/common.sh@31 -- # xtrace_disable 00:28:31.416 16:44:07 -- common/autotest_common.sh@10 -- # set +x 00:28:31.416 [2024-07-11 16:44:07.960600] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
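malloc_copy drives spdk_dd between two RAM-backed bdevs: the JSON config above, fed to the process through file descriptor 62 via --json /dev/fd/62, creates malloc0 and malloc1 as 1048576 blocks of 512 bytes each (512 MiB), then copies one into the other. Reassembled as a standalone command, assuming spdk_dd is on PATH:

    spdk_dd --ib=malloc0 --ob=malloc1 --json <(cat <<'EOF'
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_malloc_create",
            "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
          { "method": "bdev_malloc_create",
            "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
          { "method": "bdev_wait_for_examine" }
        ]
      } ]
    }
    EOF
    )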
00:28:31.416 [2024-07-11 16:44:07.961827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139158 ] 00:28:31.416 { 00:28:31.416 "subsystems": [ 00:28:31.416 { 00:28:31.416 "subsystem": "bdev", 00:28:31.416 "config": [ 00:28:31.416 { 00:28:31.416 "params": { 00:28:31.416 "num_blocks": 1048576, 00:28:31.416 "block_size": 512, 00:28:31.416 "name": "malloc0" 00:28:31.416 }, 00:28:31.416 "method": "bdev_malloc_create" 00:28:31.416 }, 00:28:31.416 { 00:28:31.416 "params": { 00:28:31.416 "num_blocks": 1048576, 00:28:31.416 "block_size": 512, 00:28:31.416 "name": "malloc1" 00:28:31.416 }, 00:28:31.416 "method": "bdev_malloc_create" 00:28:31.416 }, 00:28:31.416 { 00:28:31.416 "method": "bdev_wait_for_examine" 00:28:31.416 } 00:28:31.416 ] 00:28:31.416 } 00:28:31.416 ] 00:28:31.416 } 00:28:31.416 [2024-07-11 16:44:08.129143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.674 [2024-07-11 16:44:08.295212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:38.341  Copying: 222/512 [MB] (222 MBps) Copying: 446/512 [MB] (223 MBps) Copying: 512/512 [MB] (average 223 MBps) 00:28:38.341 00:28:38.341 ************************************ 00:28:38.341 END TEST dd_malloc_copy 00:28:38.341 ************************************ 00:28:38.341 00:28:38.341 real 0m13.580s 00:28:38.341 user 0m12.445s 00:28:38.341 sys 0m1.017s 00:28:38.341 16:44:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:38.341 16:44:14 -- common/autotest_common.sh@10 -- # set +x 00:28:38.341 00:28:38.341 real 0m13.710s 00:28:38.341 user 0m12.534s 00:28:38.341 sys 0m1.058s 00:28:38.341 16:44:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:38.341 ************************************ 00:28:38.341 16:44:14 -- common/autotest_common.sh@10 -- # set +x 00:28:38.341 END TEST spdk_dd_malloc 00:28:38.341 ************************************ 00:28:38.341 16:44:14 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:28:38.341 16:44:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:38.341 16:44:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:38.341 16:44:14 -- common/autotest_common.sh@10 -- # set +x 00:28:38.341 ************************************ 00:28:38.341 START TEST spdk_dd_bdev_to_bdev 00:28:38.341 ************************************ 00:28:38.341 16:44:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:28:38.341 * Looking for test storage... 
00:28:38.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:38.341 16:44:14 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:38.341 16:44:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:38.341 16:44:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:38.341 16:44:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:38.341 16:44:14 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:38.341 16:44:14 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:38.341 16:44:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:38.341 16:44:14 -- paths/export.sh@5 -- # export PATH 00:28:38.341 16:44:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:38.341 16:44:14 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:28:38.341 16:44:14 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:28:38.342 16:44:14 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:28:38.342 16:44:14 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:28:38.342 16:44:14 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:28:38.342 16:44:14 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:28:38.342 16:44:14 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:28:38.342 16:44:14 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:28:38.342 16:44:14 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:28:38.342 16:44:14 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:28:38.342 16:44:14 -- 
dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:28:38.342 16:44:14 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:28:38.342 16:44:14 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:28:38.342 16:44:14 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:28:38.342 [2024-07-11 16:44:14.910936] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:38.342 [2024-07-11 16:44:14.911145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139318 ] 00:28:38.342 [2024-07-11 16:44:15.077568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.601 [2024-07-11 16:44:15.249363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.105  Copying: 256/256 [MB] (average 1497 MBps) 00:28:40.105 00:28:40.105 16:44:16 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:40.105 16:44:16 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:40.105 16:44:16 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:28:40.105 16:44:16 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:28:40.105 16:44:16 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:28:40.105 16:44:16 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:28:40.105 16:44:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:40.105 16:44:16 -- common/autotest_common.sh@10 -- # set +x 00:28:40.105 ************************************ 00:28:40.105 START TEST dd_inflate_file 00:28:40.105 ************************************ 00:28:40.105 16:44:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:28:40.105 [2024-07-11 16:44:16.703961] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
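The dd_inflate_file step starting here pads the magic file up to a size worth copying: dd.dump0 already holds the 27-byte line 'This Is Our Magic, find it' (26 characters plus newline, echoed above), and 64 MiB of zeros are appended behind it with --oflag=append, which is why the wc -c check in the next step reports 67108891 bytes (27 + 64 x 1048576). In outline:

    echo 'This Is Our Magic, find it' > dd.dump0                              # 27 bytes
    spdk_dd --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64
    wc -c < dd.dump0                                                          # 67108891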
00:28:40.105 [2024-07-11 16:44:16.704165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139353 ] 00:28:40.105 [2024-07-11 16:44:16.869206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.364 [2024-07-11 16:44:17.022153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:41.560  Copying: 64/64 [MB] (average 1523 MBps) 00:28:41.560 00:28:41.560 00:28:41.560 real 0m1.607s 00:28:41.560 user 0m1.228s 00:28:41.560 sys 0m0.249s 00:28:41.560 16:44:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:41.560 16:44:18 -- common/autotest_common.sh@10 -- # set +x 00:28:41.560 ************************************ 00:28:41.560 END TEST dd_inflate_file 00:28:41.560 ************************************ 00:28:41.560 16:44:18 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:28:41.560 16:44:18 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:28:41.560 16:44:18 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:28:41.560 16:44:18 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:28:41.560 16:44:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:41.560 16:44:18 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:28:41.560 16:44:18 -- common/autotest_common.sh@10 -- # set +x 00:28:41.560 16:44:18 -- dd/common.sh@31 -- # xtrace_disable 00:28:41.560 16:44:18 -- common/autotest_common.sh@10 -- # set +x 00:28:41.560 ************************************ 00:28:41.560 START TEST dd_copy_to_out_bdev 00:28:41.560 ************************************ 00:28:41.560 16:44:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:28:41.560 { 00:28:41.560 "subsystems": [ 00:28:41.560 { 00:28:41.560 "subsystem": "bdev", 00:28:41.560 "config": [ 00:28:41.560 { 00:28:41.560 "params": { 00:28:41.560 "block_size": 4096, 00:28:41.560 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:41.560 "name": "aio1" 00:28:41.560 }, 00:28:41.560 "method": "bdev_aio_create" 00:28:41.560 }, 00:28:41.560 { 00:28:41.560 "params": { 00:28:41.560 "trtype": "pcie", 00:28:41.560 "traddr": "0000:00:06.0", 00:28:41.560 "name": "Nvme0" 00:28:41.560 }, 00:28:41.560 "method": "bdev_nvme_attach_controller" 00:28:41.560 }, 00:28:41.560 { 00:28:41.560 "method": "bdev_wait_for_examine" 00:28:41.560 } 00:28:41.560 ] 00:28:41.560 } 00:28:41.560 ] 00:28:41.560 } 00:28:41.560 [2024-07-11 16:44:18.361083] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
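dd_copy_to_out_bdev then pushes the inflated 64 MiB file into the NVMe namespace through the bdev layer; its JSON config (dumped above) registers both the AIO backing file as aio1 and the PCIe controller at 0000:00:06.0 as Nvme0, whose first namespace appears as the bdev Nvme0n1. Reassembled as a standalone command from the parameters in the trace:

    spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json <(cat <<'EOF'
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [
          { "method": "bdev_aio_create",
            "params": { "name": "aio1",
                        "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
                        "block_size": 4096 } },
          { "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "pcie", "traddr": "0000:00:06.0" } },
          { "method": "bdev_wait_for_examine" }
        ]
      } ]
    }
    EOF
    )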
00:28:41.560 [2024-07-11 16:44:18.361395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139398 ] 00:28:41.819 [2024-07-11 16:44:18.526654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.078 [2024-07-11 16:44:18.680022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.653  Copying: 45/64 [MB] (45 MBps) Copying: 64/64 [MB] (average 45 MBps) 00:28:44.653 00:28:44.653 00:28:44.653 real 0m3.142s 00:28:44.653 user 0m2.733s 00:28:44.653 sys 0m0.318s 00:28:44.653 16:44:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:44.653 16:44:21 -- common/autotest_common.sh@10 -- # set +x 00:28:44.653 ************************************ 00:28:44.653 END TEST dd_copy_to_out_bdev 00:28:44.653 ************************************ 00:28:44.912 16:44:21 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:28:44.912 16:44:21 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:28:44.912 16:44:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:44.912 16:44:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:44.912 16:44:21 -- common/autotest_common.sh@10 -- # set +x 00:28:44.912 ************************************ 00:28:44.912 START TEST dd_offset_magic 00:28:44.912 ************************************ 00:28:44.912 16:44:21 -- common/autotest_common.sh@1104 -- # offset_magic 00:28:44.912 16:44:21 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:28:44.912 16:44:21 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:28:44.912 16:44:21 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:28:44.912 16:44:21 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:28:44.912 16:44:21 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:28:44.912 16:44:21 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:28:44.912 16:44:21 -- dd/common.sh@31 -- # xtrace_disable 00:28:44.912 16:44:21 -- common/autotest_common.sh@10 -- # set +x 00:28:44.912 [2024-07-11 16:44:21.565661] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:44.912 [2024-07-11 16:44:21.565855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139464 ] 00:28:44.912 { 00:28:44.912 "subsystems": [ 00:28:44.912 { 00:28:44.912 "subsystem": "bdev", 00:28:44.912 "config": [ 00:28:44.912 { 00:28:44.912 "params": { 00:28:44.912 "block_size": 4096, 00:28:44.912 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:44.912 "name": "aio1" 00:28:44.912 }, 00:28:44.912 "method": "bdev_aio_create" 00:28:44.912 }, 00:28:44.912 { 00:28:44.912 "params": { 00:28:44.912 "trtype": "pcie", 00:28:44.912 "traddr": "0000:00:06.0", 00:28:44.912 "name": "Nvme0" 00:28:44.912 }, 00:28:44.912 "method": "bdev_nvme_attach_controller" 00:28:44.912 }, 00:28:44.912 { 00:28:44.912 "method": "bdev_wait_for_examine" 00:28:44.912 } 00:28:44.912 ] 00:28:44.912 } 00:28:44.912 ] 00:28:44.912 } 00:28:45.171 [2024-07-11 16:44:21.733586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.171 [2024-07-11 16:44:21.897882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.676  Copying: 65/65 [MB] (average 255 MBps) 00:28:46.676 00:28:46.676 16:44:23 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:28:46.676 16:44:23 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:28:46.676 16:44:23 -- dd/common.sh@31 -- # xtrace_disable 00:28:46.676 16:44:23 -- common/autotest_common.sh@10 -- # set +x 00:28:46.934 [2024-07-11 16:44:23.483893] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:46.934 [2024-07-11 16:44:23.484119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139516 ] 00:28:46.934 { 00:28:46.934 "subsystems": [ 00:28:46.934 { 00:28:46.934 "subsystem": "bdev", 00:28:46.934 "config": [ 00:28:46.934 { 00:28:46.934 "params": { 00:28:46.934 "block_size": 4096, 00:28:46.934 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:46.934 "name": "aio1" 00:28:46.934 }, 00:28:46.934 "method": "bdev_aio_create" 00:28:46.934 }, 00:28:46.934 { 00:28:46.934 "params": { 00:28:46.934 "trtype": "pcie", 00:28:46.934 "traddr": "0000:00:06.0", 00:28:46.934 "name": "Nvme0" 00:28:46.934 }, 00:28:46.934 "method": "bdev_nvme_attach_controller" 00:28:46.934 }, 00:28:46.934 { 00:28:46.934 "method": "bdev_wait_for_examine" 00:28:46.934 } 00:28:46.934 ] 00:28:46.934 } 00:28:46.934 ] 00:28:46.934 } 00:28:46.934 [2024-07-11 16:44:23.650706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.192 [2024-07-11 16:44:23.814827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.385  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:48.385 00:28:48.385 16:44:25 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:28:48.385 16:44:25 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:28:48.385 16:44:25 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:28:48.385 16:44:25 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:28:48.385 16:44:25 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:28:48.385 16:44:25 -- dd/common.sh@31 -- # xtrace_disable 00:28:48.385 16:44:25 -- common/autotest_common.sh@10 -- # set +x 00:28:48.645 [2024-07-11 16:44:25.206752] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
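dd_offset_magic checks that --seek/--skip arithmetic survives the round trip: for each offset (16, then 64, counted in 1 MiB blocks), 65 blocks are copied from Nvme0n1 into aio1 starting at that offset, one block is read back from the same offset, and its first 26 bytes must still be the planted magic string. A sketch of the loop — $conf is an assumed variable holding the path to the JSON config shown above, and dd.dump1 as the readback destination is inferred from the dump paths in the trace:

    for offset in 16 64; do
        spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=$offset --bs=1048576 --json "$conf"
        spdk_dd --ib=aio1 --of=dd.dump1 --count=1 --skip=$offset --bs=1048576 --json "$conf"
        read -rn26 magic_check < dd.dump1
        [[ $magic_check == 'This Is Our Magic, find it' ]]
    done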
00:28:48.645 [2024-07-11 16:44:25.206961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139545 ] 00:28:48.645 { 00:28:48.645 "subsystems": [ 00:28:48.645 { 00:28:48.645 "subsystem": "bdev", 00:28:48.645 "config": [ 00:28:48.645 { 00:28:48.645 "params": { 00:28:48.645 "block_size": 4096, 00:28:48.645 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:48.645 "name": "aio1" 00:28:48.645 }, 00:28:48.645 "method": "bdev_aio_create" 00:28:48.645 }, 00:28:48.645 { 00:28:48.645 "params": { 00:28:48.645 "trtype": "pcie", 00:28:48.645 "traddr": "0000:00:06.0", 00:28:48.645 "name": "Nvme0" 00:28:48.645 }, 00:28:48.645 "method": "bdev_nvme_attach_controller" 00:28:48.645 }, 00:28:48.645 { 00:28:48.645 "method": "bdev_wait_for_examine" 00:28:48.645 } 00:28:48.645 ] 00:28:48.645 } 00:28:48.645 ] 00:28:48.645 } 00:28:48.645 [2024-07-11 16:44:25.374219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.903 [2024-07-11 16:44:25.534111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.422  Copying: 65/65 [MB] (average 320 MBps) 00:28:50.422 00:28:50.422 16:44:26 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:28:50.422 16:44:26 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:28:50.422 16:44:26 -- dd/common.sh@31 -- # xtrace_disable 00:28:50.422 16:44:26 -- common/autotest_common.sh@10 -- # set +x 00:28:50.422 [2024-07-11 16:44:27.021921] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:50.422 [2024-07-11 16:44:27.022095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139579 ] 00:28:50.422 { 00:28:50.422 "subsystems": [ 00:28:50.422 { 00:28:50.422 "subsystem": "bdev", 00:28:50.422 "config": [ 00:28:50.422 { 00:28:50.422 "params": { 00:28:50.422 "block_size": 4096, 00:28:50.422 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:50.422 "name": "aio1" 00:28:50.422 }, 00:28:50.422 "method": "bdev_aio_create" 00:28:50.422 }, 00:28:50.422 { 00:28:50.422 "params": { 00:28:50.422 "trtype": "pcie", 00:28:50.422 "traddr": "0000:00:06.0", 00:28:50.422 "name": "Nvme0" 00:28:50.422 }, 00:28:50.422 "method": "bdev_nvme_attach_controller" 00:28:50.422 }, 00:28:50.422 { 00:28:50.422 "method": "bdev_wait_for_examine" 00:28:50.422 } 00:28:50.422 ] 00:28:50.422 } 00:28:50.422 ] 00:28:50.422 } 00:28:50.422 [2024-07-11 16:44:27.174679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.695 [2024-07-11 16:44:27.327606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.886  Copying: 1024/1024 [kB] (average 1000 MBps) 00:28:51.886 00:28:51.886 16:44:28 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:28:51.886 16:44:28 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:28:51.886 00:28:51.886 real 0m7.159s 00:28:51.886 user 0m5.447s 00:28:51.886 sys 0m0.935s 00:28:51.886 16:44:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:51.886 16:44:28 -- common/autotest_common.sh@10 -- # set +x 00:28:51.886 ************************************ 00:28:51.886 END TEST dd_offset_magic 00:28:51.886 ************************************ 00:28:52.145 16:44:28 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:28:52.145 16:44:28 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:28:52.145 16:44:28 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:52.145 16:44:28 -- dd/common.sh@11 -- # local nvme_ref= 00:28:52.145 16:44:28 -- dd/common.sh@12 -- # local size=4194330 00:28:52.145 16:44:28 -- dd/common.sh@14 -- # local bs=1048576 00:28:52.145 16:44:28 -- dd/common.sh@15 -- # local count=5 00:28:52.145 16:44:28 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:28:52.145 16:44:28 -- dd/common.sh@18 -- # gen_conf 00:28:52.145 16:44:28 -- dd/common.sh@31 -- # xtrace_disable 00:28:52.145 16:44:28 -- common/autotest_common.sh@10 -- # set +x 00:28:52.145 [2024-07-11 16:44:28.760189] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:28:52.145 [2024-07-11 16:44:28.760381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139623 ] 00:28:52.145 { 00:28:52.145 "subsystems": [ 00:28:52.145 { 00:28:52.145 "subsystem": "bdev", 00:28:52.145 "config": [ 00:28:52.145 { 00:28:52.145 "params": { 00:28:52.145 "block_size": 4096, 00:28:52.145 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:52.145 "name": "aio1" 00:28:52.145 }, 00:28:52.145 "method": "bdev_aio_create" 00:28:52.145 }, 00:28:52.145 { 00:28:52.145 "params": { 00:28:52.145 "trtype": "pcie", 00:28:52.145 "traddr": "0000:00:06.0", 00:28:52.145 "name": "Nvme0" 00:28:52.145 }, 00:28:52.145 "method": "bdev_nvme_attach_controller" 00:28:52.145 }, 00:28:52.145 { 00:28:52.145 "method": "bdev_wait_for_examine" 00:28:52.145 } 00:28:52.145 ] 00:28:52.145 } 00:28:52.145 ] 00:28:52.145 } 00:28:52.145 [2024-07-11 16:44:28.923473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.403 [2024-07-11 16:44:29.089921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.595  Copying: 5120/5120 [kB] (average 1250 MBps) 00:28:53.595 00:28:53.595 16:44:30 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:28:53.595 16:44:30 -- dd/common.sh@10 -- # local bdev=aio1 00:28:53.595 16:44:30 -- dd/common.sh@11 -- # local nvme_ref= 00:28:53.595 16:44:30 -- dd/common.sh@12 -- # local size=4194330 00:28:53.595 16:44:30 -- dd/common.sh@14 -- # local bs=1048576 00:28:53.595 16:44:30 -- dd/common.sh@15 -- # local count=5 00:28:53.595 16:44:30 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:28:53.595 16:44:30 -- dd/common.sh@18 -- # gen_conf 00:28:53.595 16:44:30 -- dd/common.sh@31 -- # xtrace_disable 00:28:53.595 16:44:30 -- common/autotest_common.sh@10 -- # set +x 00:28:53.595 [2024-07-11 16:44:30.383474] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
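The clear_nvme calls in this cleanup zero out both test bdevs before the files are removed: size=4194330 (4 MiB plus the 26-byte magic window) is rounded up to count=5 blocks at bs=1048576, and /dev/zero is streamed in through spdk_dd. Approximately — the helper body below is a sketch of the dd/common.sh locals visible in the trace, with $conf again standing in for the config path:

    clear_nvme() {
        local bdev=$1 size=4194330 bs=1048576
        local count=$(( (size + bs - 1) / bs ))             # rounds up to 5 blocks
        spdk_dd --if=/dev/zero --bs=$bs --ob=$bdev --count=$count --json "$conf"
    }
    clear_nvme Nvme0n1
    clear_nvme aio1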
00:28:53.595 [2024-07-11 16:44:30.383676] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139649 ] 00:28:53.595 { 00:28:53.595 "subsystems": [ 00:28:53.595 { 00:28:53.595 "subsystem": "bdev", 00:28:53.595 "config": [ 00:28:53.595 { 00:28:53.595 "params": { 00:28:53.595 "block_size": 4096, 00:28:53.595 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:53.595 "name": "aio1" 00:28:53.595 }, 00:28:53.595 "method": "bdev_aio_create" 00:28:53.595 }, 00:28:53.595 { 00:28:53.595 "params": { 00:28:53.595 "trtype": "pcie", 00:28:53.595 "traddr": "0000:00:06.0", 00:28:53.595 "name": "Nvme0" 00:28:53.595 }, 00:28:53.595 "method": "bdev_nvme_attach_controller" 00:28:53.595 }, 00:28:53.595 { 00:28:53.595 "method": "bdev_wait_for_examine" 00:28:53.595 } 00:28:53.595 ] 00:28:53.595 } 00:28:53.595 ] 00:28:53.595 } 00:28:53.853 [2024-07-11 16:44:30.550173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.111 [2024-07-11 16:44:30.708628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.303  Copying: 5120/5120 [kB] (average 250 MBps) 00:28:55.303 00:28:55.303 16:44:32 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:28:55.303 00:28:55.303 real 0m17.336s 00:28:55.303 user 0m13.427s 00:28:55.303 sys 0m2.523s 00:28:55.303 16:44:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:55.303 16:44:32 -- common/autotest_common.sh@10 -- # set +x 00:28:55.303 ************************************ 00:28:55.303 END TEST spdk_dd_bdev_to_bdev 00:28:55.303 ************************************ 00:28:55.562 16:44:32 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:28:55.562 16:44:32 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:28:55.562 16:44:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:55.562 16:44:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:55.562 16:44:32 -- common/autotest_common.sh@10 -- # set +x 00:28:55.562 ************************************ 00:28:55.562 START TEST spdk_dd_sparse 00:28:55.562 ************************************ 00:28:55.562 16:44:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:28:55.562 * Looking for test storage... 
00:28:55.562 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:55.562 16:44:32 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:55.562 16:44:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.562 16:44:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.562 16:44:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.562 16:44:32 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:55.562 16:44:32 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:55.562 16:44:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:55.562 16:44:32 -- paths/export.sh@5 -- # export PATH 00:28:55.562 16:44:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:28:55.562 16:44:32 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:28:55.562 16:44:32 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:28:55.562 16:44:32 -- dd/sparse.sh@110 -- # file1=file_zero1 00:28:55.562 16:44:32 -- dd/sparse.sh@111 -- # file2=file_zero2 00:28:55.562 16:44:32 -- dd/sparse.sh@112 -- # file3=file_zero3 00:28:55.562 16:44:32 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:28:55.562 16:44:32 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:28:55.562 16:44:32 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:28:55.562 16:44:32 -- dd/sparse.sh@118 -- # prepare 00:28:55.562 16:44:32 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:28:55.562 16:44:32 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:28:55.562 1+0 records in 00:28:55.562 1+0 records 
out 00:28:55.562 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00738467 s, 568 MB/s 00:28:55.562 16:44:32 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:28:55.562 1+0 records in 00:28:55.562 1+0 records out 00:28:55.562 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00890921 s, 471 MB/s 00:28:55.562 16:44:32 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:28:55.562 1+0 records in 00:28:55.562 1+0 records out 00:28:55.562 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00684432 s, 613 MB/s 00:28:55.562 16:44:32 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:28:55.562 16:44:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:55.562 16:44:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:55.562 16:44:32 -- common/autotest_common.sh@10 -- # set +x 00:28:55.562 ************************************ 00:28:55.562 START TEST dd_sparse_file_to_file 00:28:55.562 ************************************ 00:28:55.562 16:44:32 -- common/autotest_common.sh@1104 -- # file_to_file 00:28:55.562 16:44:32 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:28:55.562 16:44:32 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:28:55.562 16:44:32 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:28:55.562 16:44:32 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:28:55.562 16:44:32 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:28:55.562 16:44:32 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:28:55.562 16:44:32 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:28:55.562 16:44:32 -- dd/sparse.sh@41 -- # gen_conf 00:28:55.562 16:44:32 -- dd/common.sh@31 -- # xtrace_disable 00:28:55.562 16:44:32 -- common/autotest_common.sh@10 -- # set +x 00:28:55.562 [2024-07-11 16:44:32.341346] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
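file_to_file, starting here, checks that sparseness survives a round trip through the lvstore: the fixture prepared above is a file with a 36 MiB apparent size and three 4 MiB extents at 0, 16 MiB and 32 MiB (seek=0/4/8 at bs=4M), so matching stat %s (37748736 bytes) and stat %b (24576 512-byte blocks, i.e. 12 MiB actually allocated) on both sides proves the holes were preserved. The prepare-and-verify skeleton, with $conf as an assumed path to the lvstore config dumped below:

    truncate dd_sparse_aio_disk --size 104857600           # 100 MiB backing file
    dd if=/dev/zero of=file_zero1 bs=4M count=1            # extent at 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4     # extent at 16 MiB
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8     # extent at 32 MiB
    spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json "$conf"
    [[ $(stat --printf=%s file_zero1) == $(stat --printf=%s file_zero2) ]]   # same length
    [[ $(stat --printf=%b file_zero1) == $(stat --printf=%b file_zero2) ]]   # same allocation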
00:28:55.562 [2024-07-11 16:44:32.341536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139731 ] 00:28:55.562 { 00:28:55.562 "subsystems": [ 00:28:55.562 { 00:28:55.562 "subsystem": "bdev", 00:28:55.562 "config": [ 00:28:55.562 { 00:28:55.562 "params": { 00:28:55.562 "block_size": 4096, 00:28:55.562 "filename": "dd_sparse_aio_disk", 00:28:55.562 "name": "dd_aio" 00:28:55.562 }, 00:28:55.562 "method": "bdev_aio_create" 00:28:55.562 }, 00:28:55.562 { 00:28:55.562 "params": { 00:28:55.562 "lvs_name": "dd_lvstore", 00:28:55.562 "bdev_name": "dd_aio" 00:28:55.562 }, 00:28:55.562 "method": "bdev_lvol_create_lvstore" 00:28:55.562 }, 00:28:55.562 { 00:28:55.562 "method": "bdev_wait_for_examine" 00:28:55.562 } 00:28:55.562 ] 00:28:55.562 } 00:28:55.562 ] 00:28:55.562 } 00:28:55.821 [2024-07-11 16:44:32.503217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.081 [2024-07-11 16:44:32.666373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.285  Copying: 12/36 [MB] (average 1200 MBps) 00:28:57.285 00:28:57.285 16:44:34 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:28:57.285 16:44:34 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:28:57.285 16:44:34 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:28:57.285 16:44:34 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:28:57.285 16:44:34 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:28:57.285 16:44:34 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:28:57.285 16:44:34 -- dd/sparse.sh@52 -- # stat1_b=24576 00:28:57.285 16:44:34 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:28:57.285 16:44:34 -- dd/sparse.sh@53 -- # stat2_b=24576 00:28:57.285 16:44:34 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:28:57.285 00:28:57.285 real 0m1.777s 00:28:57.285 user 0m1.429s 00:28:57.285 sys 0m0.221s 00:28:57.285 16:44:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:57.285 16:44:34 -- common/autotest_common.sh@10 -- # set +x 00:28:57.285 ************************************ 00:28:57.285 END TEST dd_sparse_file_to_file 00:28:57.285 ************************************ 00:28:57.543 16:44:34 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:28:57.543 16:44:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:57.543 16:44:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:57.543 16:44:34 -- common/autotest_common.sh@10 -- # set +x 00:28:57.543 ************************************ 00:28:57.543 START TEST dd_sparse_file_to_bdev 00:28:57.543 ************************************ 00:28:57.543 16:44:34 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:28:57.543 16:44:34 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:28:57.543 16:44:34 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:28:57.543 16:44:34 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size"]=37748736 ["thin_provision"]=true) 00:28:57.543 16:44:34 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:28:57.543 16:44:34 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:28:57.543 16:44:34 -- dd/sparse.sh@73 -- # gen_conf 00:28:57.543 16:44:34 -- 
dd/common.sh@31 -- # xtrace_disable 00:28:57.543 16:44:34 -- common/autotest_common.sh@10 -- # set +x 00:28:57.543 [2024-07-11 16:44:34.165340] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:57.543 [2024-07-11 16:44:34.165537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139818 ] 00:28:57.543 { 00:28:57.543 "subsystems": [ 00:28:57.543 { 00:28:57.543 "subsystem": "bdev", 00:28:57.543 "config": [ 00:28:57.543 { 00:28:57.543 "params": { 00:28:57.543 "block_size": 4096, 00:28:57.543 "filename": "dd_sparse_aio_disk", 00:28:57.543 "name": "dd_aio" 00:28:57.543 }, 00:28:57.543 "method": "bdev_aio_create" 00:28:57.543 }, 00:28:57.543 { 00:28:57.543 "params": { 00:28:57.543 "lvs_name": "dd_lvstore", 00:28:57.543 "thin_provision": true, 00:28:57.543 "lvol_name": "dd_lvol", 00:28:57.543 "size": 37748736 00:28:57.543 }, 00:28:57.543 "method": "bdev_lvol_create" 00:28:57.543 }, 00:28:57.543 { 00:28:57.543 "method": "bdev_wait_for_examine" 00:28:57.543 } 00:28:57.543 ] 00:28:57.543 } 00:28:57.543 ] 00:28:57.543 } 00:28:57.543 [2024-07-11 16:44:34.330764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.802 [2024-07-11 16:44:34.485458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.060 [2024-07-11 16:44:34.738142] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:28:58.060  Copying: 12/36 [MB] (average 923 MBps)[2024-07-11 16:44:34.785753] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:28:58.994 00:28:58.994 00:28:59.253 00:28:59.253 real 0m1.702s 00:28:59.253 user 0m1.362s 00:28:59.253 sys 0m0.248s 00:28:59.253 16:44:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:59.253 ************************************ 00:28:59.253 END TEST dd_sparse_file_to_bdev 00:28:59.253 ************************************ 00:28:59.253 16:44:35 -- common/autotest_common.sh@10 -- # set +x 00:28:59.253 16:44:35 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:28:59.253 16:44:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:59.253 16:44:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:59.253 16:44:35 -- common/autotest_common.sh@10 -- # set +x 00:28:59.253 ************************************ 00:28:59.253 START TEST dd_sparse_bdev_to_file 00:28:59.253 ************************************ 00:28:59.253 16:44:35 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:28:59.253 16:44:35 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:28:59.253 16:44:35 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:28:59.253 16:44:35 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:28:59.253 16:44:35 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:28:59.253 16:44:35 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:28:59.253 16:44:35 -- dd/sparse.sh@91 -- # gen_conf 00:28:59.253 16:44:35 -- dd/common.sh@31 -- # xtrace_disable 00:28:59.253 16:44:35 -- common/autotest_common.sh@10 -- # set +x 
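The file_to_bdev config above writes the sparse file into a thin-provisioned logical volume, so only the 12 MiB of real data should consume clusters on dd_lvstore (the lvstore itself is rediscovered from dd_sparse_aio_disk when dd_aio is examined, which is why no bdev_lvol_create_lvstore entry is needed here). Note the deprecation warning it triggers: this SPDK build still accepts a byte-granular 'size' in bdev_lvol_create, slated for removal in v23.09, and later releases take the size in MiB instead, so the parameter below is version-dependent. The lvol half of the config, isolated:

    { "method": "bdev_lvol_create",
      "params": { "lvs_name": "dd_lvstore",
                  "lvol_name": "dd_lvol",
                  "size": 37748736,
                  "thin_provision": true } }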
00:28:59.253 [2024-07-11 16:44:35.916462] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:28:59.253 [2024-07-11 16:44:35.916641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139869 ] 00:28:59.253 { 00:28:59.253 "subsystems": [ 00:28:59.253 { 00:28:59.253 "subsystem": "bdev", 00:28:59.253 "config": [ 00:28:59.253 { 00:28:59.253 "params": { 00:28:59.253 "block_size": 4096, 00:28:59.253 "filename": "dd_sparse_aio_disk", 00:28:59.253 "name": "dd_aio" 00:28:59.253 }, 00:28:59.253 "method": "bdev_aio_create" 00:28:59.253 }, 00:28:59.253 { 00:28:59.253 "method": "bdev_wait_for_examine" 00:28:59.253 } 00:28:59.253 ] 00:28:59.253 } 00:28:59.253 ] 00:28:59.253 } 00:28:59.511 [2024-07-11 16:44:36.083017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.511 [2024-07-11 16:44:36.242634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.145  Copying: 12/36 [MB] (average 1200 MBps) 00:29:01.145 00:29:01.145 16:44:37 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:29:01.145 16:44:37 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:29:01.145 16:44:37 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:29:01.145 16:44:37 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:29:01.145 16:44:37 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:29:01.145 16:44:37 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:29:01.145 16:44:37 -- dd/sparse.sh@102 -- # stat2_b=24576 00:29:01.145 16:44:37 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:29:01.145 16:44:37 -- dd/sparse.sh@103 -- # stat3_b=24576 00:29:01.145 16:44:37 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:29:01.145 00:29:01.145 real 0m1.733s 00:29:01.145 user 0m1.409s 00:29:01.145 sys 0m0.226s 00:29:01.145 16:44:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.145 16:44:37 -- common/autotest_common.sh@10 -- # set +x 00:29:01.145 ************************************ 00:29:01.145 END TEST dd_sparse_bdev_to_file 00:29:01.145 ************************************ 00:29:01.145 16:44:37 -- dd/sparse.sh@1 -- # cleanup 00:29:01.145 16:44:37 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:29:01.145 16:44:37 -- dd/sparse.sh@12 -- # rm file_zero1 00:29:01.145 16:44:37 -- dd/sparse.sh@13 -- # rm file_zero2 00:29:01.145 16:44:37 -- dd/sparse.sh@14 -- # rm file_zero3 00:29:01.145 00:29:01.145 real 0m5.487s 00:29:01.145 user 0m4.349s 00:29:01.145 sys 0m0.817s 00:29:01.145 16:44:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.145 16:44:37 -- common/autotest_common.sh@10 -- # set +x 00:29:01.145 ************************************ 00:29:01.145 END TEST spdk_dd_sparse 00:29:01.145 ************************************ 00:29:01.145 16:44:37 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:01.145 16:44:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:01.145 16:44:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.145 16:44:37 -- common/autotest_common.sh@10 -- # set +x 00:29:01.145 ************************************ 00:29:01.145 START TEST spdk_dd_negative 00:29:01.145 ************************************ 00:29:01.145 16:44:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:29:01.145 * Looking for test storage... 
00:29:01.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:29:01.145 16:44:37 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:01.145 16:44:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.145 16:44:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.145 16:44:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.145 16:44:37 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:01.145 16:44:37 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:01.145 16:44:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:01.145 16:44:37 -- paths/export.sh@5 -- # export PATH 00:29:01.145 16:44:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:01.145 16:44:37 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:01.145 16:44:37 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:01.145 16:44:37 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:01.145 16:44:37 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:29:01.145 16:44:37 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:29:01.145 16:44:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:01.145 16:44:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.145 16:44:37 -- common/autotest_common.sh@10 -- # set +x 00:29:01.145 ************************************ 00:29:01.145 
START TEST dd_invalid_arguments 00:29:01.145 ************************************ 00:29:01.145 16:44:37 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:29:01.145 16:44:37 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:01.145 16:44:37 -- common/autotest_common.sh@640 -- # local es=0 00:29:01.145 16:44:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:01.145 16:44:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.145 16:44:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.145 16:44:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.145 16:44:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.146 16:44:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.146 16:44:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.146 16:44:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.146 16:44:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:01.146 16:44:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:29:01.146 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:29:01.146 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:29:01.146 options: 00:29:01.146 -c, --config JSON config file (default none) 00:29:01.146 --json JSON config file (default none) 00:29:01.146 --json-ignore-init-errors 00:29:01.146 don't exit on invalid config entry 00:29:01.146 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:29:01.146 -g, --single-file-segments 00:29:01.146 force creating just one hugetlbfs file 00:29:01.146 -h, --help show this usage 00:29:01.146 -i, --shm-id shared memory ID (optional) 00:29:01.146 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:29:01.146 --lcores lcore to CPU mapping list. The list is in the format: 00:29:01.146 [<,lcores[@CPUs]>...] 00:29:01.146 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:29:01.146 Within the group, '-' is used for range separator, 00:29:01.146 ',' is used for single number separator. 00:29:01.146 '( )' can be omitted for single element group, 00:29:01.146 '@' can be omitted if cpus and lcores have the same value 00:29:01.146 -n, --mem-channels channel number of memory channels used for DPDK 00:29:01.146 -p, --main-core main (primary) core for DPDK 00:29:01.146 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:29:01.146 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:29:01.146 --disable-cpumask-locks Disable CPU core lock files. 
00:29:01.146 --silence-noticelog disable notice level logging to stderr 00:29:01.146 --msg-mempool-size global message memory pool size in count (default: 262143) 00:29:01.146 -u, --no-pci disable PCI access 00:29:01.146 --wait-for-rpc wait for RPCs to initialize subsystems 00:29:01.146 --max-delay maximum reactor delay (in microseconds) 00:29:01.146 -B, --pci-blocked pci addr to block (can be used more than once) 00:29:01.146 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:29:01.146 -R, --huge-unlink unlink huge files after initialization 00:29:01.146 -v, --version print SPDK version 00:29:01.146 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:29:01.146 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:29:01.146 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:29:01.146 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:29:01.146 Tracepoints vary in size and can use more than one trace entry. 00:29:01.146 --rpcs-allowed comma-separated list of permitted RPCS 00:29:01.146 --env-context Opaque context for use of the env implementation 00:29:01.146 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:29:01.146 --no-huge run without using hugepages 00:29:01.146 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:29:01.146 -e, --tpoint-group [:] 00:29:01.146 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:29:01.146 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:29:01.146 Groups and [2024-07-11 16:44:37.845533] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:29:01.146 masks can be combined (e.g. thread,bdev:0x1). 00:29:01.146 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:29:01.146 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:29:01.146 [--------- DD Options ---------] 00:29:01.146 --if Input file. Must specify either --if or --ib. 00:29:01.146 --ib Input bdev. Must specifier either --if or --ib 00:29:01.146 --of Output file. Must specify either --of or --ob. 00:29:01.146 --ob Output bdev. Must specify either --of or --ob. 00:29:01.146 --iflag Input file flags. 00:29:01.146 --oflag Output file flags. 00:29:01.146 --bs I/O unit size (default: 4096) 00:29:01.146 --qd Queue depth (default: 2) 00:29:01.146 --count I/O unit count. The number of I/O units to copy. (default: all) 00:29:01.146 --skip Skip this many I/O units at start of input. 
(default: 0) 00:29:01.146 --seek Skip this many I/O units at start of output. (default: 0) 00:29:01.146 --aio Force usage of AIO. (by default io_uring is used if available) 00:29:01.146 --sparse Enable hole skipping in input target 00:29:01.146 Available iflag and oflag values: 00:29:01.146 append - append mode 00:29:01.146 direct - use direct I/O for data 00:29:01.146 directory - fail unless a directory 00:29:01.146 dsync - use synchronized I/O for data 00:29:01.146 noatime - do not update access time 00:29:01.146 noctty - do not assign controlling terminal from file 00:29:01.146 nofollow - do not follow symlinks 00:29:01.146 nonblock - use non-blocking I/O 00:29:01.146 sync - use synchronized I/O for data and metadata 00:29:01.146 16:44:37 -- common/autotest_common.sh@643 -- # es=2 00:29:01.146 16:44:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:01.146 16:44:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:01.146 16:44:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:01.146 00:29:01.146 real 0m0.105s 00:29:01.146 user 0m0.049s 00:29:01.146 sys 0m0.056s 00:29:01.146 16:44:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.146 16:44:37 -- common/autotest_common.sh@10 -- # set +x 00:29:01.146 ************************************ 00:29:01.146 END TEST dd_invalid_arguments 00:29:01.146 ************************************ 00:29:01.146 16:44:37 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:29:01.146 16:44:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:01.146 16:44:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.146 16:44:37 -- common/autotest_common.sh@10 -- # set +x 00:29:01.146 ************************************ 00:29:01.146 START TEST dd_double_input 00:29:01.146 ************************************ 00:29:01.146 16:44:37 -- common/autotest_common.sh@1104 -- # double_input 00:29:01.146 16:44:37 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:01.146 16:44:37 -- common/autotest_common.sh@640 -- # local es=0 00:29:01.146 16:44:37 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:01.146 16:44:37 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.146 16:44:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.146 16:44:37 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.146 16:44:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.146 16:44:37 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.146 16:44:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.146 16:44:37 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.146 16:44:37 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:01.146 16:44:37 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:29:01.405 [2024-07-11 16:44:37.990784] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:29:01.405 16:44:38 -- common/autotest_common.sh@643 -- # es=22 00:29:01.405 16:44:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:01.405 16:44:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:01.405 16:44:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:01.405 00:29:01.405 real 0m0.101s 00:29:01.405 user 0m0.050s 00:29:01.405 sys 0m0.051s 00:29:01.405 16:44:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.405 ************************************ 00:29:01.405 END TEST dd_double_input 00:29:01.405 ************************************ 00:29:01.405 16:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:01.405 16:44:38 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:29:01.405 16:44:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:01.405 16:44:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.405 16:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:01.405 ************************************ 00:29:01.405 START TEST dd_double_output 00:29:01.405 ************************************ 00:29:01.405 16:44:38 -- common/autotest_common.sh@1104 -- # double_output 00:29:01.405 16:44:38 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:01.405 16:44:38 -- common/autotest_common.sh@640 -- # local es=0 00:29:01.405 16:44:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:01.405 16:44:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.405 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.405 16:44:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.405 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.405 16:44:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.405 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.405 16:44:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.405 16:44:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:01.405 16:44:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:29:01.405 [2024-07-11 16:44:38.139430] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:29:01.405 16:44:38 -- common/autotest_common.sh@643 -- # es=22 00:29:01.405 16:44:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:01.405 16:44:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:01.405 16:44:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:01.405 00:29:01.405 real 0m0.102s 00:29:01.405 user 0m0.048s 00:29:01.405 sys 0m0.055s 00:29:01.405 16:44:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.405 16:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:01.405 ************************************ 00:29:01.405 END TEST dd_double_output 00:29:01.405 ************************************ 00:29:01.664 16:44:38 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:29:01.664 16:44:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:01.664 16:44:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.664 16:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:01.664 ************************************ 00:29:01.664 START TEST dd_no_input 00:29:01.664 ************************************ 00:29:01.664 16:44:38 -- common/autotest_common.sh@1104 -- # no_input 00:29:01.664 16:44:38 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:01.664 16:44:38 -- common/autotest_common.sh@640 -- # local es=0 00:29:01.664 16:44:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:01.664 16:44:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.664 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.664 16:44:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.664 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.664 16:44:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.664 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.664 16:44:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.664 16:44:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:01.664 16:44:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:29:01.664 [2024-07-11 16:44:38.270809] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:29:01.664 16:44:38 -- common/autotest_common.sh@643 -- # es=22 00:29:01.664 16:44:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:01.664 16:44:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:01.664 16:44:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:01.664 00:29:01.664 real 0m0.083s 00:29:01.664 user 0m0.052s 00:29:01.664 sys 0m0.031s 00:29:01.664 16:44:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.664 16:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:01.664 ************************************ 00:29:01.664 END TEST dd_no_input 00:29:01.664 ************************************ 00:29:01.664 16:44:38 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:29:01.664 16:44:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:01.664 16:44:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.664 16:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:01.664 ************************************ 
00:29:01.664 START TEST dd_no_output 00:29:01.664 ************************************ 00:29:01.664 16:44:38 -- common/autotest_common.sh@1104 -- # no_output 00:29:01.664 16:44:38 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:01.664 16:44:38 -- common/autotest_common.sh@640 -- # local es=0 00:29:01.664 16:44:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:01.664 16:44:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.664 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.664 16:44:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.664 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.664 16:44:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.664 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.664 16:44:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.664 16:44:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:01.664 16:44:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:29:01.664 [2024-07-11 16:44:38.415336] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:29:01.664 16:44:38 -- common/autotest_common.sh@643 -- # es=22 00:29:01.664 16:44:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:01.664 16:44:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:01.664 16:44:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:01.664 00:29:01.664 real 0m0.100s 00:29:01.664 user 0m0.052s 00:29:01.664 sys 0m0.048s 00:29:01.664 16:44:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.664 16:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:01.664 ************************************ 00:29:01.664 END TEST dd_no_output 00:29:01.664 ************************************ 00:29:01.923 16:44:38 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:29:01.923 16:44:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:01.923 16:44:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.923 16:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:01.923 ************************************ 00:29:01.923 START TEST dd_wrong_blocksize 00:29:01.923 ************************************ 00:29:01.923 16:44:38 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:29:01.923 16:44:38 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:01.923 16:44:38 -- common/autotest_common.sh@640 -- # local es=0 00:29:01.923 16:44:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:01.923 16:44:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.923 16:44:38 -- common/autotest_common.sh@632 -- # case 
"$(type -t "$arg")" in 00:29:01.923 16:44:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.923 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.923 16:44:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.923 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.923 16:44:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.923 16:44:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:01.923 16:44:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:29:01.923 [2024-07-11 16:44:38.566471] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:29:01.923 16:44:38 -- common/autotest_common.sh@643 -- # es=22 00:29:01.923 16:44:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:01.923 16:44:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:01.923 16:44:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:01.923 00:29:01.923 real 0m0.101s 00:29:01.923 user 0m0.035s 00:29:01.923 sys 0m0.067s 00:29:01.923 16:44:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.923 16:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:01.923 ************************************ 00:29:01.923 END TEST dd_wrong_blocksize 00:29:01.923 ************************************ 00:29:01.923 16:44:38 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:29:01.923 16:44:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:01.923 16:44:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:01.923 16:44:38 -- common/autotest_common.sh@10 -- # set +x 00:29:01.923 ************************************ 00:29:01.923 START TEST dd_smaller_blocksize 00:29:01.923 ************************************ 00:29:01.923 16:44:38 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:29:01.923 16:44:38 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:01.923 16:44:38 -- common/autotest_common.sh@640 -- # local es=0 00:29:01.923 16:44:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:01.923 16:44:38 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.923 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.923 16:44:38 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.923 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.923 16:44:38 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.923 16:44:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:01.923 16:44:38 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:01.923 16:44:38 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:29:01.923 16:44:38 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:29:01.923 [2024-07-11 16:44:38.719575] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:01.923 [2024-07-11 16:44:38.719750] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140130 ] 00:29:02.182 [2024-07-11 16:44:38.885510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.441 [2024-07-11 16:44:39.106829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.008 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:29:03.008 [2024-07-11 16:44:39.637833] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:29:03.008 [2024-07-11 16:44:39.637936] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:03.575 [2024-07-11 16:44:40.210948] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:03.834 16:44:40 -- common/autotest_common.sh@643 -- # es=244 00:29:03.834 16:44:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:03.834 16:44:40 -- common/autotest_common.sh@652 -- # es=116 00:29:03.834 16:44:40 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:03.834 16:44:40 -- common/autotest_common.sh@660 -- # es=1 00:29:03.834 16:44:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:03.834 00:29:03.834 real 0m1.874s 00:29:03.834 user 0m1.356s 00:29:03.835 sys 0m0.417s 00:29:03.835 ************************************ 00:29:03.835 END TEST dd_smaller_blocksize 00:29:03.835 ************************************ 00:29:03.835 16:44:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:03.835 16:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:03.835 16:44:40 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:29:03.835 16:44:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:03.835 16:44:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:03.835 16:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:03.835 ************************************ 00:29:03.835 START TEST dd_invalid_count 00:29:03.835 ************************************ 00:29:03.835 16:44:40 -- common/autotest_common.sh@1104 -- # invalid_count 00:29:03.835 16:44:40 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:03.835 16:44:40 -- common/autotest_common.sh@640 -- # local es=0 00:29:03.835 16:44:40 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:03.835 16:44:40 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:03.835 16:44:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:03.835 16:44:40 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:03.835 16:44:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:03.835 16:44:40 
-- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:03.835 16:44:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:03.835 16:44:40 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:03.835 16:44:40 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:03.835 16:44:40 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:29:03.835 [2024-07-11 16:44:40.641020] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:29:04.094 16:44:40 -- common/autotest_common.sh@643 -- # es=22 00:29:04.094 16:44:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:04.094 16:44:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:04.094 16:44:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:04.094 00:29:04.094 real 0m0.102s 00:29:04.094 user 0m0.064s 00:29:04.094 sys 0m0.037s 00:29:04.094 16:44:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.094 16:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:04.094 ************************************ 00:29:04.094 END TEST dd_invalid_count 00:29:04.094 ************************************ 00:29:04.094 16:44:40 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:29:04.094 16:44:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:04.094 16:44:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:04.094 16:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:04.094 ************************************ 00:29:04.094 START TEST dd_invalid_oflag 00:29:04.094 ************************************ 00:29:04.094 16:44:40 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:29:04.094 16:44:40 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:04.094 16:44:40 -- common/autotest_common.sh@640 -- # local es=0 00:29:04.094 16:44:40 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:04.094 16:44:40 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.094 16:44:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:04.095 16:44:40 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.095 16:44:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:04.095 16:44:40 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.095 16:44:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:04.095 16:44:40 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.095 16:44:40 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:04.095 16:44:40 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:29:04.095 [2024-07-11 16:44:40.780018] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:29:04.095 16:44:40 -- common/autotest_common.sh@643 -- # es=22 00:29:04.095 16:44:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:04.095 16:44:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:04.095 
16:44:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:04.095 ************************************ 00:29:04.095 END TEST dd_invalid_oflag 00:29:04.095 00:29:04.095 real 0m0.099s 00:29:04.095 user 0m0.067s 00:29:04.095 sys 0m0.033s 00:29:04.095 16:44:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.095 16:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:04.095 ************************************ 00:29:04.095 16:44:40 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:29:04.095 16:44:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:04.095 16:44:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:04.095 16:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:04.095 ************************************ 00:29:04.095 START TEST dd_invalid_iflag 00:29:04.095 ************************************ 00:29:04.095 16:44:40 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:29:04.095 16:44:40 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:04.095 16:44:40 -- common/autotest_common.sh@640 -- # local es=0 00:29:04.095 16:44:40 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:04.095 16:44:40 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.095 16:44:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:04.095 16:44:40 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.095 16:44:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:04.095 16:44:40 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.095 16:44:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:04.095 16:44:40 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.095 16:44:40 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:04.095 16:44:40 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:29:04.352 [2024-07-11 16:44:40.926455] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:29:04.352 16:44:40 -- common/autotest_common.sh@643 -- # es=22 00:29:04.352 16:44:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:04.352 16:44:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:29:04.352 16:44:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:04.352 00:29:04.352 real 0m0.098s 00:29:04.352 user 0m0.041s 00:29:04.352 sys 0m0.057s 00:29:04.352 16:44:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.352 16:44:40 -- common/autotest_common.sh@10 -- # set +x 00:29:04.352 ************************************ 00:29:04.352 END TEST dd_invalid_iflag 00:29:04.352 ************************************ 00:29:04.352 16:44:41 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:29:04.352 16:44:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:04.352 16:44:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:04.353 16:44:41 -- common/autotest_common.sh@10 -- # set +x 00:29:04.353 ************************************ 00:29:04.353 START TEST dd_unknown_flag 00:29:04.353 ************************************ 00:29:04.353 16:44:41 -- common/autotest_common.sh@1104 -- # 
unknown_flag 00:29:04.353 16:44:41 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:04.353 16:44:41 -- common/autotest_common.sh@640 -- # local es=0 00:29:04.353 16:44:41 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:04.353 16:44:41 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.353 16:44:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:04.353 16:44:41 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.353 16:44:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:04.353 16:44:41 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.353 16:44:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:04.353 16:44:41 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:04.353 16:44:41 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:04.353 16:44:41 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:29:04.353 [2024-07-11 16:44:41.078725] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:04.353 [2024-07-11 16:44:41.079103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140264 ] 00:29:04.610 [2024-07-11 16:44:41.246269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.610 [2024-07-11 16:44:41.408282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.868 [2024-07-11 16:44:41.655525] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:29:04.868 [2024-07-11 16:44:41.655622] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:29:04.868 [2024-07-11 16:44:41.655646] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Invalid argument 00:29:04.868 [2024-07-11 16:44:41.655706] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:05.433 [2024-07-11 16:44:42.240837] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:05.999 16:44:42 -- common/autotest_common.sh@643 -- # es=234 00:29:05.999 16:44:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:05.999 16:44:42 -- common/autotest_common.sh@652 -- # es=106 00:29:05.999 ************************************ 00:29:05.999 END TEST dd_unknown_flag 00:29:05.999 ************************************ 00:29:05.999 16:44:42 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:05.999 16:44:42 -- common/autotest_common.sh@660 -- # es=1 00:29:05.999 16:44:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:05.999 00:29:05.999 real 0m1.550s 00:29:05.999 user 0m1.222s 00:29:05.999 sys 0m0.227s 00:29:05.999 16:44:42 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:29:05.999 16:44:42 -- common/autotest_common.sh@10 -- # set +x 00:29:05.999 16:44:42 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:29:05.999 16:44:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:05.999 16:44:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.999 16:44:42 -- common/autotest_common.sh@10 -- # set +x 00:29:05.999 ************************************ 00:29:05.999 START TEST dd_invalid_json 00:29:05.999 ************************************ 00:29:05.999 16:44:42 -- common/autotest_common.sh@1104 -- # invalid_json 00:29:05.999 16:44:42 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:05.999 16:44:42 -- dd/negative_dd.sh@95 -- # : 00:29:05.999 16:44:42 -- common/autotest_common.sh@640 -- # local es=0 00:29:05.999 16:44:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:05.999 16:44:42 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:05.999 16:44:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:05.999 16:44:42 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:05.999 16:44:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:05.999 16:44:42 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:05.999 16:44:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:29:05.999 16:44:42 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:05.999 16:44:42 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:29:05.999 16:44:42 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:29:05.999 [2024-07-11 16:44:42.686938] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:05.999 [2024-07-11 16:44:42.687132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140305 ] 00:29:06.257 [2024-07-11 16:44:42.858063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.257 [2024-07-11 16:44:43.056876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.257 [2024-07-11 16:44:43.057097] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:29:06.257 [2024-07-11 16:44:43.057141] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:06.257 [2024-07-11 16:44:43.057235] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:29:06.820 16:44:43 -- common/autotest_common.sh@643 -- # es=234 00:29:06.820 16:44:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:29:06.820 16:44:43 -- common/autotest_common.sh@652 -- # es=106 00:29:06.820 16:44:43 -- common/autotest_common.sh@653 -- # case "$es" in 00:29:06.820 16:44:43 -- common/autotest_common.sh@660 -- # es=1 00:29:06.820 ************************************ 00:29:06.820 END TEST dd_invalid_json 00:29:06.820 ************************************ 00:29:06.820 16:44:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:29:06.820 00:29:06.820 real 0m0.751s 00:29:06.820 user 0m0.536s 00:29:06.820 sys 0m0.110s 00:29:06.820 16:44:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.820 16:44:43 -- common/autotest_common.sh@10 -- # set +x 00:29:06.820 00:29:06.820 real 0m5.713s 00:29:06.820 user 0m3.917s 00:29:06.820 sys 0m1.456s 00:29:06.820 16:44:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.820 16:44:43 -- common/autotest_common.sh@10 -- # set +x 00:29:06.820 ************************************ 00:29:06.820 END TEST spdk_dd_negative 00:29:06.820 ************************************ 00:29:06.820 00:29:06.820 real 2m15.748s 00:29:06.820 user 1m46.377s 00:29:06.820 sys 0m19.409s 00:29:06.820 16:44:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.821 ************************************ 00:29:06.821 END TEST spdk_dd 00:29:06.821 16:44:43 -- common/autotest_common.sh@10 -- # set +x 00:29:06.821 ************************************ 00:29:06.821 16:44:43 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:29:06.821 16:44:43 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:06.821 16:44:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:06.821 16:44:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:06.821 16:44:43 -- common/autotest_common.sh@10 -- # set +x 00:29:06.821 ************************************ 00:29:06.821 START TEST blockdev_nvme 00:29:06.821 ************************************ 00:29:06.821 16:44:43 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:29:06.821 * Looking for test storage... 
00:29:06.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:06.821 16:44:43 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:06.821 16:44:43 -- bdev/nbd_common.sh@6 -- # set -e 00:29:06.821 16:44:43 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:06.821 16:44:43 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:06.821 16:44:43 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:06.821 16:44:43 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:06.821 16:44:43 -- bdev/blockdev.sh@18 -- # : 00:29:06.821 16:44:43 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:06.821 16:44:43 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:06.821 16:44:43 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:06.821 16:44:43 -- bdev/blockdev.sh@672 -- # uname -s 00:29:06.821 16:44:43 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:06.821 16:44:43 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:06.821 16:44:43 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:29:06.821 16:44:43 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:06.821 16:44:43 -- bdev/blockdev.sh@682 -- # dek= 00:29:06.821 16:44:43 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:06.821 16:44:43 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:06.821 16:44:43 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:06.821 16:44:43 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:29:06.821 16:44:43 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:29:06.821 16:44:43 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:06.821 16:44:43 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=140418 00:29:06.821 16:44:43 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:06.821 16:44:43 -- bdev/blockdev.sh@47 -- # waitforlisten 140418 00:29:06.821 16:44:43 -- common/autotest_common.sh@819 -- # '[' -z 140418 ']' 00:29:06.821 16:44:43 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:06.821 16:44:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.821 16:44:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:06.821 16:44:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.821 16:44:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:06.821 16:44:43 -- common/autotest_common.sh@10 -- # set +x 00:29:07.078 [2024-07-11 16:44:43.638163] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:07.078 [2024-07-11 16:44:43.638331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140418 ] 00:29:07.078 [2024-07-11 16:44:43.791385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.336 [2024-07-11 16:44:43.956445] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:07.336 [2024-07-11 16:44:43.956668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.712 16:44:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:08.712 16:44:45 -- common/autotest_common.sh@852 -- # return 0 00:29:08.712 16:44:45 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:08.712 16:44:45 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:29:08.712 16:44:45 -- bdev/blockdev.sh@79 -- # local json 00:29:08.712 16:44:45 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:29:08.712 16:44:45 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:08.712 16:44:45 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:29:08.712 16:44:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.712 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:29:08.712 16:44:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.712 16:44:45 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:08.712 16:44:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.712 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:29:08.712 16:44:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.712 16:44:45 -- bdev/blockdev.sh@738 -- # cat 00:29:08.712 16:44:45 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:08.712 16:44:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.712 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:29:08.712 16:44:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.712 16:44:45 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:08.712 16:44:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.712 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:29:08.712 16:44:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.712 16:44:45 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:08.712 16:44:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.712 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:29:08.712 16:44:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.712 16:44:45 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:08.712 16:44:45 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:08.712 16:44:45 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:08.712 16:44:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.712 16:44:45 -- common/autotest_common.sh@10 -- # set +x 00:29:08.712 16:44:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.712 16:44:45 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:08.712 16:44:45 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:08.712 16:44:45 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "d0542c80-8016-4e51-a0dc-c5da42612655"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d0542c80-8016-4e51-a0dc-c5da42612655",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:29:08.970 16:44:45 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:08.970 16:44:45 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:29:08.970 16:44:45 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:08.970 16:44:45 -- bdev/blockdev.sh@752 -- # killprocess 140418 00:29:08.970 16:44:45 -- common/autotest_common.sh@926 -- # '[' -z 140418 ']' 00:29:08.970 16:44:45 -- common/autotest_common.sh@930 -- # kill -0 140418 00:29:08.970 16:44:45 -- common/autotest_common.sh@931 -- # uname 00:29:08.970 16:44:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:08.970 16:44:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140418 00:29:08.970 16:44:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:08.970 16:44:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:08.970 killing process with pid 140418 00:29:08.970 16:44:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140418' 00:29:08.970 16:44:45 -- common/autotest_common.sh@945 -- # kill 140418 00:29:08.970 16:44:45 -- common/autotest_common.sh@950 -- # wait 140418 00:29:10.875 16:44:47 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:10.875 16:44:47 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:10.875 16:44:47 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:10.875 16:44:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:10.875 16:44:47 -- common/autotest_common.sh@10 -- # set +x 00:29:10.875 ************************************ 00:29:10.875 START TEST bdev_hello_world 00:29:10.875 ************************************ 00:29:10.875 16:44:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:29:10.875 [2024-07-11 16:44:47.371443] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:10.875 [2024-07-11 16:44:47.371818] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140511 ] 00:29:10.875 [2024-07-11 16:44:47.538565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.133 [2024-07-11 16:44:47.691449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.392 [2024-07-11 16:44:48.064397] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:11.392 [2024-07-11 16:44:48.064480] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:29:11.392 [2024-07-11 16:44:48.064541] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:11.392 [2024-07-11 16:44:48.067182] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:11.392 [2024-07-11 16:44:48.067776] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:11.392 [2024-07-11 16:44:48.067826] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:11.392 [2024-07-11 16:44:48.068184] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:11.392 00:29:11.392 [2024-07-11 16:44:48.068231] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:12.330 00:29:12.330 real 0m1.586s 00:29:12.330 user 0m1.273s 00:29:12.330 sys 0m0.213s 00:29:12.330 ************************************ 00:29:12.330 16:44:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:12.330 16:44:48 -- common/autotest_common.sh@10 -- # set +x 00:29:12.330 END TEST bdev_hello_world 00:29:12.330 ************************************ 00:29:12.330 16:44:48 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:12.330 16:44:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:12.330 16:44:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:12.330 16:44:48 -- common/autotest_common.sh@10 -- # set +x 00:29:12.330 ************************************ 00:29:12.330 START TEST bdev_bounds 00:29:12.330 ************************************ 00:29:12.330 16:44:48 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:29:12.330 16:44:48 -- bdev/blockdev.sh@288 -- # bdevio_pid=140549 00:29:12.330 16:44:48 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:12.330 Process bdevio pid: 140549 00:29:12.330 16:44:48 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 140549' 00:29:12.330 16:44:48 -- bdev/blockdev.sh@291 -- # waitforlisten 140549 00:29:12.330 16:44:48 -- common/autotest_common.sh@819 -- # '[' -z 140549 ']' 00:29:12.330 16:44:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.330 16:44:48 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:12.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.330 16:44:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:12.330 16:44:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
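Note on the bdev_hello_world test that just finished: it reduces to a single run of the hello_bdev example against the generated bdev config. A condensed sketch of the equivalent manual invocation, with paths taken from this log:

    cd /home/vagrant/spdk_repo/spdk
    # hello_bdev opens the named bdev, writes a buffer, reads it back and prints it
    build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1

The NOTICE lines above trace exactly that sequence: open the bdev, open an io channel, write, read the string back, stop the app.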
00:29:12.330 16:44:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:12.330 16:44:48 -- common/autotest_common.sh@10 -- # set +x 00:29:12.330 [2024-07-11 16:44:49.009212] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:12.330 [2024-07-11 16:44:49.009577] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140549 ] 00:29:12.590 [2024-07-11 16:44:49.183137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:12.590 [2024-07-11 16:44:49.350585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.590 [2024-07-11 16:44:49.350740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:12.590 [2024-07-11 16:44:49.350737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:13.155 16:44:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:13.155 16:44:49 -- common/autotest_common.sh@852 -- # return 0 00:29:13.155 16:44:49 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:13.413 I/O targets: 00:29:13.413 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:29:13.413 00:29:13.413 00:29:13.413 CUnit - A unit testing framework for C - Version 2.1-3 00:29:13.413 http://cunit.sourceforge.net/ 00:29:13.413 00:29:13.413 00:29:13.413 Suite: bdevio tests on: Nvme0n1 00:29:13.413 Test: blockdev write read block ...passed 00:29:13.413 Test: blockdev write zeroes read block ...passed 00:29:13.413 Test: blockdev write zeroes read no split ...passed 00:29:13.413 Test: blockdev write zeroes read split ...passed 00:29:13.413 Test: blockdev write zeroes read split partial ...passed 00:29:13.413 Test: blockdev reset ...[2024-07-11 16:44:50.014672] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:13.413 [2024-07-11 16:44:50.018161] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:13.413 passed 00:29:13.413 Test: blockdev write read 8 blocks ...passed 00:29:13.413 Test: blockdev write read size > 128k ...passed 00:29:13.413 Test: blockdev write read invalid size ...passed 00:29:13.413 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:13.413 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:13.413 Test: blockdev write read max offset ...passed 00:29:13.413 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:13.413 Test: blockdev writev readv 8 blocks ...passed 00:29:13.413 Test: blockdev writev readv 30 x 1block ...passed 00:29:13.413 Test: blockdev writev readv block ...passed 00:29:13.413 Test: blockdev writev readv size > 128k ...passed 00:29:13.413 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:13.413 Test: blockdev comparev and writev ...[2024-07-11 16:44:50.026839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0xb740d000 len:0x1000 00:29:13.413 [2024-07-11 16:44:50.027046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:13.413 passed 00:29:13.413 Test: blockdev nvme passthru rw ...passed 00:29:13.413 Test: blockdev nvme passthru vendor specific ...[2024-07-11 16:44:50.028165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:13.413 [2024-07-11 16:44:50.028313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:13.413 passed 00:29:13.413 Test: blockdev nvme admin passthru ...passed 00:29:13.413 Test: blockdev copy ...passed 00:29:13.413 00:29:13.413 Run Summary: Type Total Ran Passed Failed Inactive 00:29:13.413 suites 1 1 n/a 0 0 00:29:13.413 tests 23 23 23 0 0 00:29:13.413 asserts 152 152 152 0 n/a 00:29:13.413 00:29:13.413 Elapsed time = 0.176 seconds 00:29:13.413 0 00:29:13.413 16:44:50 -- bdev/blockdev.sh@293 -- # killprocess 140549 00:29:13.413 16:44:50 -- common/autotest_common.sh@926 -- # '[' -z 140549 ']' 00:29:13.413 16:44:50 -- common/autotest_common.sh@930 -- # kill -0 140549 00:29:13.413 16:44:50 -- common/autotest_common.sh@931 -- # uname 00:29:13.413 16:44:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:13.413 16:44:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140549 00:29:13.413 16:44:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:13.413 killing process with pid 140549 00:29:13.413 16:44:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:13.413 16:44:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140549' 00:29:13.413 16:44:50 -- common/autotest_common.sh@945 -- # kill 140549 00:29:13.413 16:44:50 -- common/autotest_common.sh@950 -- # wait 140549 00:29:14.347 ************************************ 00:29:14.347 END TEST bdev_bounds 00:29:14.347 ************************************ 00:29:14.347 16:44:50 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:14.347 00:29:14.347 real 0m2.059s 00:29:14.347 user 0m4.808s 00:29:14.347 sys 0m0.306s 00:29:14.347 16:44:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.347 16:44:50 -- common/autotest_common.sh@10 -- # set +x 00:29:14.347 16:44:51 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 
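A note on how the bdev_bounds suite above is driven: it is two cooperating processes. bdevio is started with -w, so it loads the bdev config and then blocks on the default RPC socket; tests.py then fires the CUnit cases over that socket. Condensed, with paths from this run:

    # process 1: load bdev.json and wait for an RPC before running any tests
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json
    # process 2: trigger the CUnit cases over /var/tmp/spdk.sock
    test/bdev/bdevio/tests.py perform_tests

The COMPARE FAILURE and INVALID OPCODE notices in the output are the expected negative-path checks, not real failures; the summary still reports 23 of 23 tests passed.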
00:29:14.347 16:44:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:29:14.347 16:44:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:14.347 16:44:51 -- common/autotest_common.sh@10 -- # set +x 00:29:14.347 ************************************ 00:29:14.347 START TEST bdev_nbd 00:29:14.347 ************************************ 00:29:14.347 16:44:51 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:29:14.347 16:44:51 -- bdev/blockdev.sh@298 -- # uname -s 00:29:14.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:14.347 16:44:51 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:14.347 16:44:51 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:14.347 16:44:51 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:14.347 16:44:51 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:29:14.347 16:44:51 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:14.347 16:44:51 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:29:14.347 16:44:51 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:14.347 16:44:51 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:29:14.347 16:44:51 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:14.347 16:44:51 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:29:14.347 16:44:51 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:29:14.347 16:44:51 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:14.347 16:44:51 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:29:14.347 16:44:51 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:14.347 16:44:51 -- bdev/blockdev.sh@316 -- # nbd_pid=140615 00:29:14.347 16:44:51 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:14.347 16:44:51 -- bdev/blockdev.sh@318 -- # waitforlisten 140615 /var/tmp/spdk-nbd.sock 00:29:14.347 16:44:51 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:14.347 16:44:51 -- common/autotest_common.sh@819 -- # '[' -z 140615 ']' 00:29:14.347 16:44:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:14.347 16:44:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:14.347 16:44:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:14.347 16:44:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:14.347 16:44:51 -- common/autotest_common.sh@10 -- # set +x 00:29:14.347 [2024-07-11 16:44:51.122047] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:14.347 [2024-07-11 16:44:51.122411] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.605 [2024-07-11 16:44:51.290036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.863 [2024-07-11 16:44:51.446481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.431 16:44:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:15.431 16:44:51 -- common/autotest_common.sh@852 -- # return 0 00:29:15.431 16:44:51 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:29:15.431 16:44:51 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:15.431 16:44:51 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:29:15.431 16:44:51 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:15.431 16:44:51 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:29:15.431 16:44:51 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:15.431 16:44:51 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:29:15.431 16:44:51 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:15.431 16:44:51 -- bdev/nbd_common.sh@24 -- # local i 00:29:15.431 16:44:51 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:15.431 16:44:51 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:15.431 16:44:51 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:15.431 16:44:51 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:15.690 16:44:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:15.690 16:44:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:15.690 16:44:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:15.690 16:44:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:15.690 16:44:52 -- common/autotest_common.sh@857 -- # local i 00:29:15.690 16:44:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:15.690 16:44:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:15.690 16:44:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:15.690 16:44:52 -- common/autotest_common.sh@861 -- # break 00:29:15.690 16:44:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:15.690 16:44:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:15.690 16:44:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:15.690 1+0 records in 00:29:15.690 1+0 records out 00:29:15.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596878 s, 6.9 MB/s 00:29:15.690 16:44:52 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:15.690 16:44:52 -- common/autotest_common.sh@874 -- # size=4096 00:29:15.690 16:44:52 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:15.690 16:44:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:15.690 16:44:52 -- common/autotest_common.sh@877 -- # return 0 00:29:15.690 16:44:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:15.690 16:44:52 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:15.690 16:44:52 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:15.948 16:44:52 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:15.948 { 00:29:15.948 "nbd_device": "/dev/nbd0", 00:29:15.948 "bdev_name": "Nvme0n1" 00:29:15.948 } 00:29:15.948 ]' 00:29:15.948 16:44:52 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:15.948 16:44:52 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:15.948 { 00:29:15.948 "nbd_device": "/dev/nbd0", 00:29:15.948 "bdev_name": "Nvme0n1" 00:29:15.948 } 00:29:15.949 ]' 00:29:15.949 16:44:52 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:15.949 16:44:52 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:15.949 16:44:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:15.949 16:44:52 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:15.949 16:44:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:15.949 16:44:52 -- bdev/nbd_common.sh@51 -- # local i 00:29:15.949 16:44:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:15.949 16:44:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:15.949 16:44:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@41 -- # break 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@45 -- # return 0 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:16.207 16:44:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:16.473 16:44:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@65 -- # true 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@65 -- # count=0 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@122 -- # count=0 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@127 -- # return 0 00:29:16.474 16:44:53 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@92 -- # local 
nbd_list 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@12 -- # local i 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:16.474 16:44:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:16.731 /dev/nbd0 00:29:16.731 16:44:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:16.731 16:44:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:16.731 16:44:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:16.731 16:44:53 -- common/autotest_common.sh@857 -- # local i 00:29:16.731 16:44:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:16.731 16:44:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:16.731 16:44:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:16.731 16:44:53 -- common/autotest_common.sh@861 -- # break 00:29:16.731 16:44:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:16.731 16:44:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:16.731 16:44:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:16.731 1+0 records in 00:29:16.731 1+0 records out 00:29:16.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000696047 s, 5.9 MB/s 00:29:16.731 16:44:53 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:16.731 16:44:53 -- common/autotest_common.sh@874 -- # size=4096 00:29:16.731 16:44:53 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:16.731 16:44:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:16.731 16:44:53 -- common/autotest_common.sh@877 -- # return 0 00:29:16.731 16:44:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:16.731 16:44:53 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:16.731 16:44:53 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:16.731 16:44:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:16.731 16:44:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:16.989 { 00:29:16.989 "nbd_device": "/dev/nbd0", 00:29:16.989 "bdev_name": "Nvme0n1" 00:29:16.989 } 00:29:16.989 ]' 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:16.989 { 00:29:16.989 "nbd_device": "/dev/nbd0", 00:29:16.989 "bdev_name": "Nvme0n1" 00:29:16.989 } 00:29:16.989 ]' 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@65 -- # count=1 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@66 -- # echo 1 
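The nbd round-trip being exercised in this stretch boils down to: export the bdev as a kernel block device over the nbd RPC socket, push random data through it with dd, and compare. Condensed from the commands in this log:

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    dd if=/dev/urandom of=test/bdev/nbdrandtest bs=4096 count=256
    dd if=test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    # verify: the first 1M of the device must match the random file byte for byte
    cmp -b -n 1M test/bdev/nbdrandtest /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

Note that the RPC server here is the bdev_svc helper listening on /var/tmp/spdk-nbd.sock, not the default spdk.sock.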
00:29:16.989 16:44:53 -- bdev/nbd_common.sh@95 -- # count=1 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:16.989 256+0 records in 00:29:16.989 256+0 records out 00:29:16.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00879033 s, 119 MB/s 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:16.989 256+0 records in 00:29:16.989 256+0 records out 00:29:16.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0667354 s, 15.7 MB/s 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@51 -- # local i 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:16.989 16:44:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:17.247 16:44:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:17.247 16:44:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:17.247 16:44:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:17.247 16:44:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:17.247 16:44:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:17.247 16:44:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:17.247 16:44:54 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:17.505 16:44:54 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:17.505 16:44:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:17.505 16:44:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:17.505 16:44:54 -- bdev/nbd_common.sh@41 -- # break 00:29:17.505 16:44:54 -- 
bdev/nbd_common.sh@45 -- # return 0 00:29:17.505 16:44:54 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:17.505 16:44:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:17.505 16:44:54 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@65 -- # true 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@65 -- # count=0 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@104 -- # count=0 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@109 -- # return 0 00:29:17.762 16:44:54 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:17.762 16:44:54 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:18.019 malloc_lvol_verify 00:29:18.019 16:44:54 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:18.276 0a407abc-fc75-4e76-8989-9b1a334b3a19 00:29:18.276 16:44:54 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:18.276 f9cf9867-0a38-4b70-94dd-afbeabc95a9c 00:29:18.276 16:44:55 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:18.543 /dev/nbd0 00:29:18.543 16:44:55 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:18.543 mke2fs 1.45.5 (07-Jan-2020) 00:29:18.543 00:29:18.543 Filesystem too small for a journal 00:29:18.543 Creating filesystem with 1024 4k blocks and 1024 inodes 00:29:18.543 00:29:18.543 Allocating group tables: 0/1 done 00:29:18.543 Writing inode tables: 0/1 done 00:29:18.543 Writing superblocks and filesystem accounting information: 0/1 done 00:29:18.543 00:29:18.543 16:44:55 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:18.543 16:44:55 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:18.543 16:44:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:18.543 16:44:55 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:18.543 16:44:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:18.543 16:44:55 -- bdev/nbd_common.sh@51 -- # local i 00:29:18.543 16:44:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:18.543 16:44:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:18.800 16:44:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:18.800 16:44:55 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:18.800 16:44:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:18.800 16:44:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:18.800 16:44:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:18.800 16:44:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:18.800 16:44:55 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:19.057 16:44:55 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:19.057 16:44:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:19.057 16:44:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:19.057 16:44:55 -- bdev/nbd_common.sh@41 -- # break 00:29:19.057 16:44:55 -- bdev/nbd_common.sh@45 -- # return 0 00:29:19.057 16:44:55 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:19.057 16:44:55 -- bdev/nbd_common.sh@147 -- # return 0 00:29:19.057 16:44:55 -- bdev/blockdev.sh@324 -- # killprocess 140615 00:29:19.057 16:44:55 -- common/autotest_common.sh@926 -- # '[' -z 140615 ']' 00:29:19.057 16:44:55 -- common/autotest_common.sh@930 -- # kill -0 140615 00:29:19.057 16:44:55 -- common/autotest_common.sh@931 -- # uname 00:29:19.057 16:44:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:19.057 16:44:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 140615 00:29:19.057 killing process with pid 140615 00:29:19.057 16:44:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:19.057 16:44:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:19.057 16:44:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 140615' 00:29:19.057 16:44:55 -- common/autotest_common.sh@945 -- # kill 140615 00:29:19.057 16:44:55 -- common/autotest_common.sh@950 -- # wait 140615 00:29:19.991 ************************************ 00:29:19.991 END TEST bdev_nbd 00:29:19.991 ************************************ 00:29:19.991 16:44:56 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:29:19.991 00:29:19.991 real 0m5.452s 00:29:19.991 user 0m7.847s 00:29:19.991 sys 0m1.024s 00:29:19.991 16:44:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.991 16:44:56 -- common/autotest_common.sh@10 -- # set +x 00:29:19.991 skipping fio tests on NVMe due to multi-ns failures. 00:29:19.991 16:44:56 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:29:19.991 16:44:56 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:29:19.991 16:44:56 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:29:19.991 16:44:56 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:19.991 16:44:56 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:19.991 16:44:56 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:29:19.991 16:44:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:19.991 16:44:56 -- common/autotest_common.sh@10 -- # set +x 00:29:19.991 ************************************ 00:29:19.991 START TEST bdev_verify 00:29:19.991 ************************************ 00:29:19.991 16:44:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:19.991 [2024-07-11 16:44:56.624382] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:19.991 [2024-07-11 16:44:56.624794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140836 ] 00:29:19.991 [2024-07-11 16:44:56.796785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:20.250 [2024-07-11 16:44:56.961859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.250 [2024-07-11 16:44:56.961865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.818 Running I/O for 5 seconds... 00:29:26.143 00:29:26.143 Latency(us) 00:29:26.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.143 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:26.143 Verification LBA range: start 0x0 length 0xa0000 00:29:26.143 Nvme0n1 : 5.01 18963.26 74.08 0.00 0.00 6719.32 333.27 16086.11 00:29:26.143 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:26.143 Verification LBA range: start 0xa0000 length 0xa0000 00:29:26.143 Nvme0n1 : 5.01 18975.75 74.12 0.00 0.00 6715.12 357.47 14715.81 00:29:26.143 =================================================================================================================== 00:29:26.143 Total : 37939.01 148.20 0.00 0.00 6717.22 333.27 16086.11 00:29:36.144 ************************************ 00:29:36.144 END TEST bdev_verify 00:29:36.144 ************************************ 00:29:36.144 00:29:36.144 real 0m15.157s 00:29:36.144 user 0m29.150s 00:29:36.144 sys 0m0.332s 00:29:36.144 16:45:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:36.144 16:45:11 -- common/autotest_common.sh@10 -- # set +x 00:29:36.145 16:45:11 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:36.145 16:45:11 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:29:36.145 16:45:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:36.145 16:45:11 -- common/autotest_common.sh@10 -- # set +x 00:29:36.145 ************************************ 00:29:36.145 START TEST bdev_verify_big_io 00:29:36.145 ************************************ 00:29:36.145 16:45:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:36.145 [2024-07-11 16:45:11.825634] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:36.145 [2024-07-11 16:45:11.826009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141059 ] 00:29:36.145 [2024-07-11 16:45:11.994278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:36.145 [2024-07-11 16:45:12.149544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.145 [2024-07-11 16:45:12.149554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.145 Running I/O for 5 seconds... 
00:29:41.411 00:29:41.411 Latency(us) 00:29:41.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.411 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:41.411 Verification LBA range: start 0x0 length 0xa000 00:29:41.411 Nvme0n1 : 5.03 2054.08 128.38 0.00 0.00 61515.72 606.95 97708.22 00:29:41.411 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:41.411 Verification LBA range: start 0xa000 length 0xa000 00:29:41.411 Nvme0n1 : 5.04 2121.44 132.59 0.00 0.00 59572.43 472.90 80549.70 00:29:41.411 =================================================================================================================== 00:29:41.411 Total : 4175.53 260.97 0.00 0.00 60528.31 472.90 97708.22 00:29:42.347 ************************************ 00:29:42.347 END TEST bdev_verify_big_io 00:29:42.347 ************************************ 00:29:42.347 00:29:42.347 real 0m7.083s 00:29:42.347 user 0m13.106s 00:29:42.347 sys 0m0.241s 00:29:42.347 16:45:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:42.347 16:45:18 -- common/autotest_common.sh@10 -- # set +x 00:29:42.347 16:45:18 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:42.347 16:45:18 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:42.347 16:45:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:42.347 16:45:18 -- common/autotest_common.sh@10 -- # set +x 00:29:42.347 ************************************ 00:29:42.347 START TEST bdev_write_zeroes 00:29:42.347 ************************************ 00:29:42.347 16:45:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:42.347 [2024-07-11 16:45:18.963842] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:42.347 [2024-07-11 16:45:18.964217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141172 ] 00:29:42.347 [2024-07-11 16:45:19.130773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.605 [2024-07-11 16:45:19.283998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.864 Running I/O for 1 seconds... 
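The three perf-style tests in this stretch (bdev_verify, bdev_verify_big_io, bdev_write_zeroes) all reuse the bdevperf example; only the workload, I/O size, duration and core mask change. The invocations, as run above:

    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3
    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1

Here -q is the queue depth, -o the I/O size in bytes, -w the workload and -t the run time in seconds; -m 0x3 pins the verify runs to two cores, which is why their latency tables report one job per core (Core Mask 0x1 and 0x2).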
00:29:44.237 00:29:44.237 Latency(us) 00:29:44.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.237 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:44.237 Nvme0n1 : 1.00 65251.91 254.89 0.00 0.00 1956.54 647.91 10902.81 00:29:44.237 =================================================================================================================== 00:29:44.237 Total : 65251.91 254.89 0.00 0.00 1956.54 647.91 10902.81 00:29:44.804 ************************************ 00:29:44.804 END TEST bdev_write_zeroes 00:29:44.804 ************************************ 00:29:44.804 00:29:44.804 real 0m2.666s 00:29:44.804 user 0m2.339s 00:29:44.804 sys 0m0.224s 00:29:44.804 16:45:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:44.804 16:45:21 -- common/autotest_common.sh@10 -- # set +x 00:29:44.804 16:45:21 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:44.804 16:45:21 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:44.804 16:45:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:44.804 16:45:21 -- common/autotest_common.sh@10 -- # set +x 00:29:45.063 ************************************ 00:29:45.063 START TEST bdev_json_nonenclosed 00:29:45.063 ************************************ 00:29:45.063 16:45:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:45.063 [2024-07-11 16:45:21.658390] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:45.063 [2024-07-11 16:45:21.658763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141228 ] 00:29:45.063 [2024-07-11 16:45:21.809790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.321 [2024-07-11 16:45:21.963755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.321 [2024-07-11 16:45:21.964153] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:29:45.321 [2024-07-11 16:45:21.964293] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:45.581 ************************************ 00:29:45.581 END TEST bdev_json_nonenclosed 00:29:45.581 ************************************ 00:29:45.581 00:29:45.581 real 0m0.652s 00:29:45.581 user 0m0.436s 00:29:45.581 sys 0m0.116s 00:29:45.581 16:45:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:45.581 16:45:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.581 16:45:22 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:45.581 16:45:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:29:45.581 16:45:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:45.581 16:45:22 -- common/autotest_common.sh@10 -- # set +x 00:29:45.581 ************************************ 00:29:45.581 START TEST bdev_json_nonarray 00:29:45.581 ************************************ 00:29:45.581 16:45:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:45.581 [2024-07-11 16:45:22.359933] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:45.581 [2024-07-11 16:45:22.360244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141250 ] 00:29:45.840 [2024-07-11 16:45:22.509595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.098 [2024-07-11 16:45:22.682428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.098 [2024-07-11 16:45:22.682766] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
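Both JSON tests here are deliberate negative cases: bdevperf is pointed at nonenclosed.json and nonarray.json and is expected to fail with the json_config.c errors shown (configuration not enclosed in {}, and 'subsystems' not an array) plus a non-zero app exit. For contrast, the shape a valid config takes, reusing the controller entry from earlier in this run:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" }
            }
          ]
        }
      ]
    }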
00:29:46.098 [2024-07-11 16:45:22.682906] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:46.365 ************************************ 00:29:46.365 END TEST bdev_json_nonarray 00:29:46.365 ************************************ 00:29:46.365 00:29:46.365 real 0m0.682s 00:29:46.365 user 0m0.465s 00:29:46.365 sys 0m0.116s 00:29:46.365 16:45:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.365 16:45:22 -- common/autotest_common.sh@10 -- # set +x 00:29:46.365 16:45:23 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:29:46.365 16:45:23 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:29:46.365 16:45:23 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:29:46.365 16:45:23 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:29:46.365 16:45:23 -- bdev/blockdev.sh@809 -- # cleanup 00:29:46.365 16:45:23 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:46.365 16:45:23 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:46.365 16:45:23 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:29:46.365 16:45:23 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:29:46.365 16:45:23 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:29:46.365 16:45:23 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:29:46.365 ************************************ 00:29:46.365 END TEST blockdev_nvme 00:29:46.365 ************************************ 00:29:46.365 00:29:46.365 real 0m39.548s 00:29:46.365 user 1m3.731s 00:29:46.365 sys 0m3.227s 00:29:46.365 16:45:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:46.365 16:45:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.365 16:45:23 -- spdk/autotest.sh@219 -- # uname -s 00:29:46.365 16:45:23 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:29:46.365 16:45:23 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:46.365 16:45:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:46.365 16:45:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:46.365 16:45:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.365 ************************************ 00:29:46.365 START TEST blockdev_nvme_gpt 00:29:46.365 ************************************ 00:29:46.365 16:45:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:46.365 * Looking for test storage... 
00:29:46.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:46.365 16:45:23 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:46.365 16:45:23 -- bdev/nbd_common.sh@6 -- # set -e 00:29:46.365 16:45:23 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:46.365 16:45:23 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:46.365 16:45:23 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:46.365 16:45:23 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:46.365 16:45:23 -- bdev/blockdev.sh@18 -- # : 00:29:46.365 16:45:23 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:46.365 16:45:23 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:46.366 16:45:23 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:46.366 16:45:23 -- bdev/blockdev.sh@672 -- # uname -s 00:29:46.366 16:45:23 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:46.366 16:45:23 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:46.366 16:45:23 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:29:46.366 16:45:23 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:46.366 16:45:23 -- bdev/blockdev.sh@682 -- # dek= 00:29:46.366 16:45:23 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:46.366 16:45:23 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:46.366 16:45:23 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:46.366 16:45:23 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:29:46.366 16:45:23 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:29:46.366 16:45:23 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:46.366 16:45:23 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=141333 00:29:46.366 16:45:23 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:46.366 16:45:23 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:46.366 16:45:23 -- bdev/blockdev.sh@47 -- # waitforlisten 141333 00:29:46.366 16:45:23 -- common/autotest_common.sh@819 -- # '[' -z 141333 ']' 00:29:46.366 16:45:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.366 16:45:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:46.366 16:45:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.366 16:45:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:46.366 16:45:23 -- common/autotest_common.sh@10 -- # set +x 00:29:46.625 [2024-07-11 16:45:23.240460] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:46.625 [2024-07-11 16:45:23.240953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141333 ] 00:29:46.625 [2024-07-11 16:45:23.397045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.882 [2024-07-11 16:45:23.555089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:46.882 [2024-07-11 16:45:23.555566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.257 16:45:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:48.257 16:45:24 -- common/autotest_common.sh@852 -- # return 0 00:29:48.257 16:45:24 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:48.257 16:45:24 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:29:48.257 16:45:24 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:48.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:48.517 Waiting for block devices as requested 00:29:48.517 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:48.517 16:45:25 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:29:48.517 16:45:25 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:29:48.517 16:45:25 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:29:48.517 16:45:25 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:29:48.517 16:45:25 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:29:48.517 16:45:25 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:29:48.517 16:45:25 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:29:48.517 16:45:25 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:48.517 16:45:25 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:29:48.517 16:45:25 -- bdev/blockdev.sh@105 -- # nvme_devs=(/sys/bus/pci/drivers/nvme/*/nvme/nvme*/nvme*n*) 00:29:48.517 16:45:25 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:29:48.517 16:45:25 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:29:48.517 16:45:25 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:29:48.517 16:45:25 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:29:48.517 16:45:25 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:29:48.517 16:45:25 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:29:48.776 16:45:25 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:29:48.776 BYT; 00:29:48.776 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:29:48.776 16:45:25 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:29:48.776 BYT; 00:29:48.776 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:29:48.776 16:45:25 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:29:48.776 16:45:25 -- bdev/blockdev.sh@114 -- # break 00:29:48.776 16:45:25 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:29:48.776 16:45:25 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:29:48.776 16:45:25 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:29:48.777 16:45:25 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% 
mkpart SPDK_TEST_second 50% 100% 00:29:49.344 16:45:26 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:29:49.344 16:45:26 -- scripts/common.sh@410 -- # local spdk_guid 00:29:49.344 16:45:26 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:49.344 16:45:26 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:49.344 16:45:26 -- scripts/common.sh@415 -- # IFS='()' 00:29:49.344 16:45:26 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:29:49.344 16:45:26 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:49.344 16:45:26 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:29:49.344 16:45:26 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:49.344 16:45:26 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:49.344 16:45:26 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:49.344 16:45:26 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:29:49.344 16:45:26 -- scripts/common.sh@422 -- # local spdk_guid 00:29:49.344 16:45:26 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:49.344 16:45:26 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:49.344 16:45:26 -- scripts/common.sh@427 -- # IFS='()' 00:29:49.344 16:45:26 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:29:49.344 16:45:26 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:49.344 16:45:26 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:29:49.344 16:45:26 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:49.344 16:45:26 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:49.344 16:45:26 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:49.344 16:45:26 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:29:50.721 The operation has completed successfully. 00:29:50.721 16:45:27 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:29:51.658 The operation has completed successfully. 
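The GPT setup just completed is three commands: parted writes a fresh GPT label with two half-disk partitions, then sgdisk retags each one with an SPDK test partition-type GUID and a fixed unique GUID so the gpt bdev module can recognise them. Verbatim from this run:

    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1

The type GUIDs are the SPDK_GPT_PART_TYPE_GUID and SPDK_GPT_PART_TYPE_GUID_OLD values that the grep over module/bdev/gpt/gpt.h above extracts; the two partitions later surface as the bdevs Nvme0n1p1 and Nvme0n1p2.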
00:29:51.658 16:45:28 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:51.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:29:51.917 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:29:52.852 16:45:29 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:29:52.852 16:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.852 16:45:29 -- common/autotest_common.sh@10 -- # set +x 00:29:52.852 [] 00:29:52.852 16:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.852 16:45:29 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:29:52.852 16:45:29 -- bdev/blockdev.sh@79 -- # local json 00:29:52.852 16:45:29 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:29:52.852 16:45:29 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:52.852 16:45:29 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:29:52.852 16:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.852 16:45:29 -- common/autotest_common.sh@10 -- # set +x 00:29:52.852 16:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.852 16:45:29 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:52.852 16:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.852 16:45:29 -- common/autotest_common.sh@10 -- # set +x 00:29:52.852 16:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.852 16:45:29 -- bdev/blockdev.sh@738 -- # cat 00:29:52.852 16:45:29 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:52.852 16:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.852 16:45:29 -- common/autotest_common.sh@10 -- # set +x 00:29:52.852 16:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.852 16:45:29 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:52.852 16:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.852 16:45:29 -- common/autotest_common.sh@10 -- # set +x 00:29:52.852 16:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.852 16:45:29 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:52.852 16:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.852 16:45:29 -- common/autotest_common.sh@10 -- # set +x 00:29:52.852 16:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:52.852 16:45:29 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:52.852 16:45:29 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:52.852 16:45:29 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:52.852 16:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:52.852 16:45:29 -- common/autotest_common.sh@10 -- # set +x 00:29:52.852 16:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:53.110 16:45:29 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:53.110 16:45:29 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:53.110 16:45:29 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:29:53.110 16:45:29 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:53.110 16:45:29 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:29:53.110 16:45:29 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:53.110 16:45:29 -- bdev/blockdev.sh@752 -- # killprocess 141333 00:29:53.110 16:45:29 -- common/autotest_common.sh@926 -- # '[' -z 141333 ']' 00:29:53.110 16:45:29 -- common/autotest_common.sh@930 -- # kill -0 141333 00:29:53.110 16:45:29 -- common/autotest_common.sh@931 -- # uname 00:29:53.110 16:45:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:53.110 16:45:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141333 00:29:53.110 16:45:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:53.110 killing process with pid 141333 00:29:53.110 16:45:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:53.110 16:45:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141333' 00:29:53.110 16:45:29 -- common/autotest_common.sh@945 -- # kill 141333 00:29:53.110 16:45:29 -- common/autotest_common.sh@950 -- # wait 141333 00:29:55.014 16:45:31 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:55.014 16:45:31 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:29:55.014 16:45:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:29:55.014 16:45:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:55.014 16:45:31 -- common/autotest_common.sh@10 -- # set +x 00:29:55.014 ************************************ 00:29:55.014 START TEST bdev_hello_world 00:29:55.014 ************************************ 00:29:55.014 16:45:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:29:55.015 [2024-07-11 16:45:31.554177] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:55.015 [2024-07-11 16:45:31.554363] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141886 ] 00:29:55.015 [2024-07-11 16:45:31.718325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.274 [2024-07-11 16:45:31.890062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.533 [2024-07-11 16:45:32.267368] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:55.533 [2024-07-11 16:45:32.267477] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:29:55.533 [2024-07-11 16:45:32.267524] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:55.533 [2024-07-11 16:45:32.270242] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:55.533 [2024-07-11 16:45:32.270875] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:55.533 [2024-07-11 16:45:32.270955] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:55.533 [2024-07-11 16:45:32.271373] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:55.533 00:29:55.533 [2024-07-11 16:45:32.271423] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:56.482 00:29:56.482 real 0m1.690s 00:29:56.482 user 0m1.391s 00:29:56.482 sys 0m0.200s 00:29:56.482 ************************************ 00:29:56.482 16:45:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.482 16:45:33 -- common/autotest_common.sh@10 -- # set +x 00:29:56.482 END TEST bdev_hello_world 00:29:56.482 ************************************ 00:29:56.482 16:45:33 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:56.482 16:45:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:56.482 16:45:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:56.482 16:45:33 -- common/autotest_common.sh@10 -- # set +x 00:29:56.482 ************************************ 00:29:56.482 START TEST bdev_bounds 00:29:56.482 ************************************ 00:29:56.482 16:45:33 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:29:56.482 16:45:33 -- bdev/blockdev.sh@288 -- # bdevio_pid=141936 00:29:56.482 16:45:33 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:56.482 Process bdevio pid: 141936 00:29:56.482 16:45:33 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 141936' 00:29:56.482 16:45:33 -- bdev/blockdev.sh@291 -- # waitforlisten 141936 00:29:56.482 16:45:33 -- common/autotest_common.sh@819 -- # '[' -z 141936 ']' 00:29:56.482 16:45:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.482 16:45:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:56.482 16:45:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
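waitforlisten, whose setup is traced above, is the gate every daemonized test binary passes before the suite sends it RPCs: poll until the process is alive and its UNIX-domain RPC socket is up. Only the helper's entry is visible in this log, so the loop below is a hedged reconstruction of the pattern rather than the verbatim function; the socket test and retry cadence in particular are assumptions:

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # bail out if the process died during startup
        [[ -S $rpc_addr ]] && return 0            # RPC socket exists: the target is listening
        sleep 0.1
    done
    return 1                                      # timed out waiting for the listener
}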
00:29:56.482 16:45:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:56.482 16:45:33 -- common/autotest_common.sh@10 -- # set +x 00:29:56.482 16:45:33 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:56.740 [2024-07-11 16:45:33.298934] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:29:56.740 [2024-07-11 16:45:33.299363] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141936 ] 00:29:56.740 [2024-07-11 16:45:33.473182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:56.998 [2024-07-11 16:45:33.646612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.998 [2024-07-11 16:45:33.646763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.998 [2024-07-11 16:45:33.646758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.564 16:45:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:57.564 16:45:34 -- common/autotest_common.sh@852 -- # return 0 00:29:57.564 16:45:34 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:57.564 I/O targets: 00:29:57.564 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:29:57.564 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:29:57.564 00:29:57.564 00:29:57.564 CUnit - A unit testing framework for C - Version 2.1-3 00:29:57.564 http://cunit.sourceforge.net/ 00:29:57.564 00:29:57.564 00:29:57.564 Suite: bdevio tests on: Nvme0n1p2 00:29:57.564 Test: blockdev write read block ...passed 00:29:57.564 Test: blockdev write zeroes read block ...passed 00:29:57.564 Test: blockdev write zeroes read no split ...passed 00:29:57.564 Test: blockdev write zeroes read split ...passed 00:29:57.823 Test: blockdev write zeroes read split partial ...passed 00:29:57.823 Test: blockdev reset ...[2024-07-11 16:45:34.381980] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:57.823 [2024-07-11 16:45:34.385423] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:57.823 passed 00:29:57.823 Test: blockdev write read 8 blocks ...passed 00:29:57.823 Test: blockdev write read size > 128k ...passed 00:29:57.823 Test: blockdev write read invalid size ...passed 00:29:57.823 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:57.823 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:57.823 Test: blockdev write read max offset ...passed 00:29:57.823 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:57.823 Test: blockdev writev readv 8 blocks ...passed 00:29:57.823 Test: blockdev writev readv 30 x 1block ...passed 00:29:57.823 Test: blockdev writev readv block ...passed 00:29:57.823 Test: blockdev writev readv size > 128k ...passed 00:29:57.823 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:57.823 Test: blockdev comparev and writev ...[2024-07-11 16:45:34.394130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x11120b000 len:0x1000 00:29:57.823 [2024-07-11 16:45:34.394249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:57.823 passed 00:29:57.823 Test: blockdev nvme passthru rw ...passed 00:29:57.823 Test: blockdev nvme passthru vendor specific ...passed 00:29:57.823 Test: blockdev nvme admin passthru ...passed 00:29:57.823 Test: blockdev copy ...passed 00:29:57.823 Suite: bdevio tests on: Nvme0n1p1 00:29:57.823 Test: blockdev write read block ...passed 00:29:57.823 Test: blockdev write zeroes read block ...passed 00:29:57.823 Test: blockdev write zeroes read no split ...passed 00:29:57.823 Test: blockdev write zeroes read split ...passed 00:29:57.823 Test: blockdev write zeroes read split partial ...passed 00:29:57.823 Test: blockdev reset ...[2024-07-11 16:45:34.444156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:57.823 [2024-07-11 16:45:34.447289] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:57.823 passed 00:29:57.823 Test: blockdev write read 8 blocks ...passed 00:29:57.823 Test: blockdev write read size > 128k ...passed 00:29:57.823 Test: blockdev write read invalid size ...passed 00:29:57.823 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:57.823 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:57.823 Test: blockdev write read max offset ...passed 00:29:57.823 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:57.823 Test: blockdev writev readv 8 blocks ...passed 00:29:57.823 Test: blockdev writev readv 30 x 1block ...passed 00:29:57.823 Test: blockdev writev readv block ...passed 00:29:57.823 Test: blockdev writev readv size > 128k ...passed 00:29:57.823 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:57.823 Test: blockdev comparev and writev ...[2024-07-11 16:45:34.455515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x11120d000 len:0x1000 00:29:57.823 [2024-07-11 16:45:34.455614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:57.823 passed 00:29:57.823 Test: blockdev nvme passthru rw ...passed 00:29:57.823 Test: blockdev nvme passthru vendor specific ...passed 00:29:57.823 Test: blockdev nvme admin passthru ...passed 00:29:57.823 Test: blockdev copy ...passed 00:29:57.823 00:29:57.823 Run Summary: Type Total Ran Passed Failed Inactive 00:29:57.823 suites 2 2 n/a 0 0 00:29:57.823 tests 46 46 46 0 0 00:29:57.823 asserts 284 284 284 0 n/a 00:29:57.823 00:29:57.823 Elapsed time = 0.340 seconds 00:29:57.823 0 00:29:57.823 16:45:34 -- bdev/blockdev.sh@293 -- # killprocess 141936 00:29:57.823 16:45:34 -- common/autotest_common.sh@926 -- # '[' -z 141936 ']' 00:29:57.823 16:45:34 -- common/autotest_common.sh@930 -- # kill -0 141936 00:29:57.823 16:45:34 -- common/autotest_common.sh@931 -- # uname 00:29:57.823 16:45:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:57.823 16:45:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 141936 00:29:57.823 16:45:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:57.823 16:45:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:57.823 killing process with pid 141936 00:29:57.823 16:45:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 141936' 00:29:57.823 16:45:34 -- common/autotest_common.sh@945 -- # kill 141936 00:29:57.823 16:45:34 -- common/autotest_common.sh@950 -- # wait 141936 00:29:58.758 16:45:35 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:58.758 00:29:58.758 real 0m2.112s 00:29:58.758 user 0m4.957s 00:29:58.758 sys 0m0.360s 00:29:58.758 ************************************ 00:29:58.758 END TEST bdev_bounds 00:29:58.758 ************************************ 00:29:58.758 16:45:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:58.758 16:45:35 -- common/autotest_common.sh@10 -- # set +x 00:29:58.758 16:45:35 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:29:58.758 16:45:35 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:29:58.758 16:45:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:58.758 16:45:35 -- common/autotest_common.sh@10 -- # set +x 00:29:58.758 ************************************ 00:29:58.758 START TEST bdev_nbd 
00:29:58.758 ************************************ 00:29:58.758 16:45:35 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:29:58.758 16:45:35 -- bdev/blockdev.sh@298 -- # uname -s 00:29:58.758 16:45:35 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:58.758 16:45:35 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:58.758 16:45:35 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:58.758 16:45:35 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:29:58.758 16:45:35 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:58.758 16:45:35 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:29:58.758 16:45:35 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:58.758 16:45:35 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:29:58.758 16:45:35 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:58.758 16:45:35 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:29:58.758 16:45:35 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:29:58.758 16:45:35 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:58.758 16:45:35 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:29:58.758 16:45:35 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:58.758 16:45:35 -- bdev/blockdev.sh@316 -- # nbd_pid=142018 00:29:58.758 16:45:35 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:58.758 16:45:35 -- bdev/blockdev.sh@318 -- # waitforlisten 142018 /var/tmp/spdk-nbd.sock 00:29:58.758 16:45:35 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:58.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:58.758 16:45:35 -- common/autotest_common.sh@819 -- # '[' -z 142018 ']' 00:29:58.758 16:45:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:58.758 16:45:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:58.758 16:45:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:58.758 16:45:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:58.758 16:45:35 -- common/autotest_common.sh@10 -- # set +x 00:29:58.758 [2024-07-11 16:45:35.454668] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:29:58.758 [2024-07-11 16:45:35.454839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:59.016 [2024-07-11 16:45:35.611296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.016 [2024-07-11 16:45:35.783333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:59.951 16:45:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:59.951 16:45:36 -- common/autotest_common.sh@852 -- # return 0 00:29:59.951 16:45:36 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@24 -- # local i 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:59.951 16:45:36 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:29:59.951 16:45:36 -- common/autotest_common.sh@857 -- # local i 00:29:59.951 16:45:36 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:29:59.951 16:45:36 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:29:59.951 16:45:36 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:29:59.951 16:45:36 -- common/autotest_common.sh@861 -- # break 00:29:59.951 16:45:36 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:29:59.951 16:45:36 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:29:59.951 16:45:36 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:59.951 1+0 records in 00:29:59.951 1+0 records out 00:29:59.951 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519839 s, 7.9 MB/s 00:29:59.951 16:45:36 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:59.951 16:45:36 -- common/autotest_common.sh@874 -- # size=4096 00:29:59.951 16:45:36 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:59.951 16:45:36 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:29:59.951 16:45:36 -- common/autotest_common.sh@877 -- # return 0 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:29:59.951 16:45:36 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme0n1p2 00:30:00.209 16:45:36 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:30:00.210 16:45:36 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:30:00.210 16:45:36 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:30:00.210 16:45:36 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:30:00.210 16:45:36 -- common/autotest_common.sh@857 -- # local i 00:30:00.210 16:45:36 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:00.210 16:45:36 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:00.210 16:45:36 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:30:00.210 16:45:36 -- common/autotest_common.sh@861 -- # break 00:30:00.210 16:45:36 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:00.210 16:45:36 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:00.210 16:45:36 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:00.210 1+0 records in 00:30:00.210 1+0 records out 00:30:00.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602529 s, 6.8 MB/s 00:30:00.210 16:45:36 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.210 16:45:36 -- common/autotest_common.sh@874 -- # size=4096 00:30:00.210 16:45:36 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.210 16:45:36 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:00.210 16:45:36 -- common/autotest_common.sh@877 -- # return 0 00:30:00.210 16:45:36 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:30:00.210 16:45:36 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:30:00.210 16:45:36 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:00.468 16:45:37 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:30:00.468 { 00:30:00.468 "nbd_device": "/dev/nbd0", 00:30:00.468 "bdev_name": "Nvme0n1p1" 00:30:00.468 }, 00:30:00.468 { 00:30:00.468 "nbd_device": "/dev/nbd1", 00:30:00.468 "bdev_name": "Nvme0n1p2" 00:30:00.468 } 00:30:00.468 ]' 00:30:00.468 16:45:37 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:30:00.468 16:45:37 -- bdev/nbd_common.sh@119 -- # echo '[ 00:30:00.468 { 00:30:00.469 "nbd_device": "/dev/nbd0", 00:30:00.469 "bdev_name": "Nvme0n1p1" 00:30:00.469 }, 00:30:00.469 { 00:30:00.469 "nbd_device": "/dev/nbd1", 00:30:00.469 "bdev_name": "Nvme0n1p2" 00:30:00.469 } 00:30:00.469 ]' 00:30:00.469 16:45:37 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:30:00.469 16:45:37 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:00.469 16:45:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:00.469 16:45:37 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:00.469 16:45:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:00.469 16:45:37 -- bdev/nbd_common.sh@51 -- # local i 00:30:00.469 16:45:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:00.469 16:45:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:00.726 16:45:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:00.726 16:45:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:00.726 16:45:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:00.726 16:45:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:00.726 16:45:37 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:00.726 16:45:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:00.726 16:45:37 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@41 -- # break 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@45 -- # return 0 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@41 -- # break 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@45 -- # return 0 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:00.983 16:45:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:01.240 16:45:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:01.240 16:45:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:01.240 16:45:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:01.240 16:45:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:01.240 16:45:37 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:01.240 16:45:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@65 -- # true 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@65 -- # count=0 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@122 -- # count=0 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@127 -- # return 0 00:30:01.240 16:45:38 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@12 -- # local i 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@14 -- 
# (( i < 2 )) 00:30:01.240 16:45:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:30:01.497 /dev/nbd0 00:30:01.497 16:45:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:01.497 16:45:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:01.497 16:45:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:30:01.497 16:45:38 -- common/autotest_common.sh@857 -- # local i 00:30:01.497 16:45:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:01.497 16:45:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:01.497 16:45:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:30:01.497 16:45:38 -- common/autotest_common.sh@861 -- # break 00:30:01.497 16:45:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:01.497 16:45:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:01.497 16:45:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:01.497 1+0 records in 00:30:01.497 1+0 records out 00:30:01.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463813 s, 8.8 MB/s 00:30:01.497 16:45:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.497 16:45:38 -- common/autotest_common.sh@874 -- # size=4096 00:30:01.497 16:45:38 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.497 16:45:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:01.497 16:45:38 -- common/autotest_common.sh@877 -- # return 0 00:30:01.497 16:45:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:01.497 16:45:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:01.497 16:45:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:30:01.754 /dev/nbd1 00:30:01.754 16:45:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:01.754 16:45:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:01.754 16:45:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:30:01.754 16:45:38 -- common/autotest_common.sh@857 -- # local i 00:30:01.754 16:45:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:30:01.754 16:45:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:30:01.754 16:45:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:30:01.754 16:45:38 -- common/autotest_common.sh@861 -- # break 00:30:01.754 16:45:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:30:01.754 16:45:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:30:01.754 16:45:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:01.754 1+0 records in 00:30:01.754 1+0 records out 00:30:01.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431678 s, 9.5 MB/s 00:30:01.754 16:45:38 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.754 16:45:38 -- common/autotest_common.sh@874 -- # size=4096 00:30:01.754 16:45:38 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.754 16:45:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:30:01.754 16:45:38 -- common/autotest_common.sh@877 -- # return 0 00:30:01.754 16:45:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:01.754 16:45:38 -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:01.754 16:45:38 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:01.754 16:45:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:01.754 16:45:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:30:02.012 { 00:30:02.012 "nbd_device": "/dev/nbd0", 00:30:02.012 "bdev_name": "Nvme0n1p1" 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "nbd_device": "/dev/nbd1", 00:30:02.012 "bdev_name": "Nvme0n1p2" 00:30:02.012 } 00:30:02.012 ]' 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@64 -- # echo '[ 00:30:02.012 { 00:30:02.012 "nbd_device": "/dev/nbd0", 00:30:02.012 "bdev_name": "Nvme0n1p1" 00:30:02.012 }, 00:30:02.012 { 00:30:02.012 "nbd_device": "/dev/nbd1", 00:30:02.012 "bdev_name": "Nvme0n1p2" 00:30:02.012 } 00:30:02.012 ]' 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:30:02.012 /dev/nbd1' 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:30:02.012 /dev/nbd1' 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@65 -- # count=2 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@95 -- # count=2 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:30:02.012 256+0 records in 00:30:02.012 256+0 records out 00:30:02.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00733075 s, 143 MB/s 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:30:02.012 256+0 records in 00:30:02.012 256+0 records out 00:30:02.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0722949 s, 14.5 MB/s 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:30:02.012 16:45:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:30:02.271 256+0 records in 00:30:02.271 256+0 records out 00:30:02.271 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0834863 s, 12.6 MB/s 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write 
']' 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@51 -- # local i 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:02.271 16:45:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:02.529 16:45:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:02.529 16:45:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:02.529 16:45:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:02.529 16:45:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.529 16:45:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.529 16:45:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:02.529 16:45:39 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:02.530 16:45:39 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:02.530 16:45:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.530 16:45:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:02.530 16:45:39 -- bdev/nbd_common.sh@41 -- # break 00:30:02.530 16:45:39 -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.530 16:45:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:02.530 16:45:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@41 -- # break 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@45 -- # return 0 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:02.788 16:45:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:30:03.046 16:45:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:30:03.046 16:45:39 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:30:03.046 
16:45:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@65 -- # echo '' 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@65 -- # true 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@65 -- # count=0 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@66 -- # echo 0 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@104 -- # count=0 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@109 -- # return 0 00:30:03.305 16:45:39 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:30:03.305 16:45:39 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:30:03.563 malloc_lvol_verify 00:30:03.563 16:45:40 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:30:03.821 10b7ca30-1426-4269-ae37-1f1583b6c0c4 00:30:03.822 16:45:40 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:30:03.822 8d6431c6-2e18-48d7-9e8b-024972ae1b15 00:30:03.822 16:45:40 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:30:04.080 /dev/nbd0 00:30:04.080 16:45:40 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:30:04.080 mke2fs 1.45.5 (07-Jan-2020) 00:30:04.080 00:30:04.080 Filesystem too small for a journal 00:30:04.080 Creating filesystem with 1024 4k blocks and 1024 inodes 00:30:04.080 00:30:04.080 Allocating group tables: 0/1 done 00:30:04.080 Writing inode tables: 0/1 done 00:30:04.080 Writing superblocks and filesystem accounting information: 0/1 done 00:30:04.080 00:30:04.080 16:45:40 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:30:04.080 16:45:40 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:30:04.080 16:45:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:30:04.080 16:45:40 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:04.080 16:45:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:04.080 16:45:40 -- bdev/nbd_common.sh@51 -- # local i 00:30:04.080 16:45:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:04.080 16:45:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:30:04.339 16:45:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:04.339 16:45:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:04.339 16:45:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:04.339 16:45:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:04.339 16:45:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:04.339 16:45:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:04.339 16:45:41 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:04.339 16:45:41 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:04.339 16:45:41 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:04.339 16:45:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:04.598 16:45:41 -- bdev/nbd_common.sh@41 -- # break 00:30:04.598 16:45:41 -- bdev/nbd_common.sh@45 -- # return 0 00:30:04.598 16:45:41 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:30:04.598 16:45:41 -- bdev/nbd_common.sh@147 -- # return 0 00:30:04.598 16:45:41 -- bdev/blockdev.sh@324 -- # killprocess 142018 00:30:04.598 16:45:41 -- common/autotest_common.sh@926 -- # '[' -z 142018 ']' 00:30:04.598 16:45:41 -- common/autotest_common.sh@930 -- # kill -0 142018 00:30:04.598 16:45:41 -- common/autotest_common.sh@931 -- # uname 00:30:04.598 16:45:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:04.598 16:45:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142018 00:30:04.598 16:45:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:04.598 16:45:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:04.598 16:45:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142018' 00:30:04.598 killing process with pid 142018 00:30:04.598 16:45:41 -- common/autotest_common.sh@945 -- # kill 142018 00:30:04.598 16:45:41 -- common/autotest_common.sh@950 -- # wait 142018 00:30:05.535 ************************************ 00:30:05.535 END TEST bdev_nbd 00:30:05.535 ************************************ 00:30:05.535 16:45:42 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:30:05.535 00:30:05.535 real 0m6.738s 00:30:05.535 user 0m9.518s 00:30:05.535 sys 0m1.510s 00:30:05.535 16:45:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:05.535 16:45:42 -- common/autotest_common.sh@10 -- # set +x 00:30:05.535 16:45:42 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:30:05.535 16:45:42 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:30:05.535 skipping fio tests on NVMe due to multi-ns failures. 00:30:05.535 16:45:42 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:30:05.535 16:45:42 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:30:05.535 16:45:42 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:05.535 16:45:42 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:05.535 16:45:42 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:05.535 16:45:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:05.535 16:45:42 -- common/autotest_common.sh@10 -- # set +x 00:30:05.535 ************************************ 00:30:05.535 START TEST bdev_verify 00:30:05.535 ************************************ 00:30:05.535 16:45:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:30:05.535 [2024-07-11 16:45:42.239720] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:05.535 [2024-07-11 16:45:42.239899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142286 ] 00:30:05.794 [2024-07-11 16:45:42.393263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:05.794 [2024-07-11 16:45:42.567875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.794 [2024-07-11 16:45:42.567879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.362 Running I/O for 5 seconds... 00:30:11.623 00:30:11.623 Latency(us) 00:30:11.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.623 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:11.623 Verification LBA range: start 0x0 length 0x4ff80 00:30:11.623 Nvme0n1p1 : 5.01 8012.82 31.30 0.00 0.00 15936.12 1437.32 18945.86 00:30:11.623 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:11.623 Verification LBA range: start 0x4ff80 length 0x4ff80 00:30:11.623 Nvme0n1p1 : 5.02 7931.44 30.98 0.00 0.00 16081.36 350.02 30742.34 00:30:11.623 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:11.623 Verification LBA range: start 0x0 length 0x4ff7f 00:30:11.623 Nvme0n1p2 : 5.02 8010.04 31.29 0.00 0.00 15928.57 1980.97 17515.99 00:30:11.623 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:30:11.623 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:30:11.623 Nvme0n1p2 : 5.01 7920.01 30.94 0.00 0.00 16116.56 3455.53 32172.22 00:30:11.623 =================================================================================================================== 00:30:11.623 Total : 31874.30 124.51 0.00 0.00 16015.20 350.02 32172.22 00:30:16.885 00:30:16.885 real 0m10.786s 00:30:16.885 user 0m20.472s 00:30:16.885 sys 0m0.264s 00:30:16.885 16:45:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:16.885 16:45:52 -- common/autotest_common.sh@10 -- # set +x 00:30:16.885 ************************************ 00:30:16.885 END TEST bdev_verify 00:30:16.885 ************************************ 00:30:16.885 16:45:53 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:16.885 16:45:53 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:30:16.885 16:45:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:16.885 16:45:53 -- common/autotest_common.sh@10 -- # set +x 00:30:16.885 ************************************ 00:30:16.885 START TEST bdev_verify_big_io 00:30:16.885 ************************************ 00:30:16.885 16:45:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:16.885 [2024-07-11 16:45:53.074493] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:16.885 [2024-07-11 16:45:53.074939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142442 ] 00:30:16.885 [2024-07-11 16:45:53.232991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:16.885 [2024-07-11 16:45:53.407725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.885 [2024-07-11 16:45:53.407735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.143 Running I/O for 5 seconds... 00:30:22.448 00:30:22.448 Latency(us) 00:30:22.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.448 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:22.448 Verification LBA range: start 0x0 length 0x4ff8 00:30:22.448 Nvme0n1p1 : 5.10 937.46 58.59 0.00 0.00 135023.24 2517.18 205902.20 00:30:22.448 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:22.448 Verification LBA range: start 0x4ff8 length 0x4ff8 00:30:22.448 Nvme0n1p1 : 5.10 878.84 54.93 0.00 0.00 144075.76 3902.37 214481.45 00:30:22.448 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:22.448 Verification LBA range: start 0x0 length 0x4ff7 00:30:22.448 Nvme0n1p2 : 5.10 945.29 59.08 0.00 0.00 132374.50 707.49 157286.40 00:30:22.448 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:22.448 Verification LBA range: start 0x4ff7 length 0x4ff7 00:30:22.448 Nvme0n1p2 : 5.10 878.34 54.90 0.00 0.00 142228.78 3872.58 162052.65 00:30:22.448 =================================================================================================================== 00:30:22.448 Total : 3639.92 227.50 0.00 0.00 138258.92 707.49 214481.45 00:30:23.822 00:30:23.822 real 0m7.224s 00:30:23.822 user 0m13.377s 00:30:23.822 sys 0m0.255s 00:30:23.822 16:46:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:23.822 16:46:00 -- common/autotest_common.sh@10 -- # set +x 00:30:23.822 ************************************ 00:30:23.822 END TEST bdev_verify_big_io 00:30:23.822 ************************************ 00:30:23.822 16:46:00 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:23.822 16:46:00 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:23.822 16:46:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:23.822 16:46:00 -- common/autotest_common.sh@10 -- # set +x 00:30:23.822 ************************************ 00:30:23.822 START TEST bdev_write_zeroes 00:30:23.822 ************************************ 00:30:23.822 16:46:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:23.822 [2024-07-11 16:46:00.341472] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:23.822 [2024-07-11 16:46:00.341770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142576 ] 00:30:23.822 [2024-07-11 16:46:00.494780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.080 [2024-07-11 16:46:00.651973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.338 Running I/O for 1 seconds... 00:30:25.272 00:30:25.272 Latency(us) 00:30:25.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.272 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:25.272 Nvme0n1p1 : 1.00 24059.29 93.98 0.00 0.00 5309.44 2532.07 20256.58 00:30:25.272 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:25.272 Nvme0n1p2 : 1.01 24045.52 93.93 0.00 0.00 5305.14 2293.76 13822.14 00:30:25.272 =================================================================================================================== 00:30:25.272 Total : 48104.82 187.91 0.00 0.00 5307.29 2293.76 20256.58 00:30:26.206 00:30:26.206 real 0m2.586s 00:30:26.206 user 0m2.271s 00:30:26.206 sys 0m0.216s 00:30:26.206 16:46:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:26.206 16:46:02 -- common/autotest_common.sh@10 -- # set +x 00:30:26.206 ************************************ 00:30:26.206 END TEST bdev_write_zeroes 00:30:26.206 ************************************ 00:30:26.206 16:46:02 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:26.206 16:46:02 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:26.206 16:46:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:26.206 16:46:02 -- common/autotest_common.sh@10 -- # set +x 00:30:26.206 ************************************ 00:30:26.206 START TEST bdev_json_nonenclosed 00:30:26.206 ************************************ 00:30:26.206 16:46:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:26.206 [2024-07-11 16:46:03.002907] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:26.206 [2024-07-11 16:46:03.003594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142627 ] 00:30:26.464 [2024-07-11 16:46:03.182927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.722 [2024-07-11 16:46:03.344844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.722 [2024-07-11 16:46:03.345036] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
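bdev_json_nonenclosed is deliberately a failure path: bdevperf is fed nonenclosed.json, the config loader rejects it with the 'not enclosed in {}' error above, and the test passes precisely because spdk_app_stop then reports non-zero. The fixture itself is not echoed into this log; illustratively, a config the loader would accept must be one top-level object of the shape already used earlier in this run (the companion bdev_json_nonarray test just below exercises the sibling check, where "subsystems" is present but not an array):

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } }
      ]
    }
  ]
}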
00:30:26.722 [2024-07-11 16:46:03.345077] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:26.979 00:30:26.979 real 0m0.721s 00:30:26.979 user 0m0.492s 00:30:26.979 sys 0m0.127s 00:30:26.980 16:46:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:26.980 ************************************ 00:30:26.980 END TEST bdev_json_nonenclosed 00:30:26.980 ************************************ 00:30:26.980 16:46:03 -- common/autotest_common.sh@10 -- # set +x 00:30:26.980 16:46:03 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:26.980 16:46:03 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:30:26.980 16:46:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:26.980 16:46:03 -- common/autotest_common.sh@10 -- # set +x 00:30:26.980 ************************************ 00:30:26.980 START TEST bdev_json_nonarray 00:30:26.980 ************************************ 00:30:26.980 16:46:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:26.980 [2024-07-11 16:46:03.748538] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:26.980 [2024-07-11 16:46:03.748935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142659 ] 00:30:27.238 [2024-07-11 16:46:03.902870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.496 [2024-07-11 16:46:04.075508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.496 [2024-07-11 16:46:04.075759] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:30:27.496 [2024-07-11 16:46:04.075803] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:27.754 00:30:27.754 real 0m0.728s 00:30:27.754 user 0m0.487s 00:30:27.754 sys 0m0.140s 00:30:27.754 16:46:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:27.754 ************************************ 00:30:27.754 END TEST bdev_json_nonarray 00:30:27.754 ************************************ 00:30:27.754 16:46:04 -- common/autotest_common.sh@10 -- # set +x 00:30:27.754 16:46:04 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:30:27.754 16:46:04 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:30:27.754 16:46:04 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:30:27.754 16:46:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:27.754 16:46:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:27.754 16:46:04 -- common/autotest_common.sh@10 -- # set +x 00:30:27.754 ************************************ 00:30:27.754 START TEST bdev_gpt_uuid 00:30:27.754 ************************************ 00:30:27.754 16:46:04 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:30:27.754 16:46:04 -- bdev/blockdev.sh@612 -- # local bdev 00:30:27.754 16:46:04 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:30:27.754 16:46:04 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=142697 00:30:27.754 16:46:04 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:27.754 16:46:04 -- bdev/blockdev.sh@47 -- # waitforlisten 142697 00:30:27.754 16:46:04 -- common/autotest_common.sh@819 -- # '[' -z 142697 ']' 00:30:27.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.754 16:46:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.754 16:46:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:27.754 16:46:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.754 16:46:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:27.754 16:46:04 -- common/autotest_common.sh@10 -- # set +x 00:30:27.754 16:46:04 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:27.754 [2024-07-11 16:46:04.555004] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:27.754 [2024-07-11 16:46:04.555468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142697 ] 00:30:28.013 [2024-07-11 16:46:04.722450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.270 [2024-07-11 16:46:04.898405] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:28.270 [2024-07-11 16:46:04.898655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.657 16:46:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:29.657 16:46:06 -- common/autotest_common.sh@852 -- # return 0 00:30:29.657 16:46:06 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:29.657 16:46:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:29.657 16:46:06 -- common/autotest_common.sh@10 -- # set +x 00:30:29.657 Some configs were skipped because the RPC state that can call them passed over. 
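The bdev_gpt_uuid test drives a long-lived spdk_tgt over its RPC socket: load_config replays the saved bdev configuration (the "Some configs were skipped" notice is normal when the target is not yet in the matching framework state for every entry), bdev_wait_for_examine blocks until GPT probing has produced the partition bdevs, and bdev_get_bdevs then looks one up by its partition UUID. A standalone sketch of the same sequence using scripts/rpc.py (paths and the UUID are taken from this log; the stdin form of load_config is the documented usage, and the harness's rpc_cmd wrapper is assumed here to behave equivalently):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Replay a previously saved configuration into the running target:
    "$RPC" load_config < /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # Block until bdev examination (including GPT parsing) has finished,
    # so that Nvme0n1p1/Nvme0n1p2 exist before they are queried:
    "$RPC" bdev_wait_for_examine
    # Fetch one bdev by name or alias -- here the GPT unique partition GUID:
    "$RPC" bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 | jq -r '.[0].aliases[0]'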
00:30:29.657 16:46:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:29.657 16:46:06 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:30:29.657 16:46:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:29.657 16:46:06 -- common/autotest_common.sh@10 -- # set +x 00:30:29.657 16:46:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:29.657 16:46:06 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:30:29.657 16:46:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:29.657 16:46:06 -- common/autotest_common.sh@10 -- # set +x 00:30:29.657 16:46:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:29.657 16:46:06 -- bdev/blockdev.sh@619 -- # bdev='[ 00:30:29.657 { 00:30:29.657 "name": "Nvme0n1p1", 00:30:29.657 "aliases": [ 00:30:29.657 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:30:29.657 ], 00:30:29.657 "product_name": "GPT Disk", 00:30:29.657 "block_size": 4096, 00:30:29.657 "num_blocks": 655104, 00:30:29.657 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:29.657 "assigned_rate_limits": { 00:30:29.657 "rw_ios_per_sec": 0, 00:30:29.657 "rw_mbytes_per_sec": 0, 00:30:29.657 "r_mbytes_per_sec": 0, 00:30:29.657 "w_mbytes_per_sec": 0 00:30:29.657 }, 00:30:29.657 "claimed": false, 00:30:29.657 "zoned": false, 00:30:29.657 "supported_io_types": { 00:30:29.657 "read": true, 00:30:29.657 "write": true, 00:30:29.657 "unmap": true, 00:30:29.657 "write_zeroes": true, 00:30:29.657 "flush": true, 00:30:29.657 "reset": true, 00:30:29.657 "compare": true, 00:30:29.657 "compare_and_write": false, 00:30:29.657 "abort": true, 00:30:29.657 "nvme_admin": false, 00:30:29.657 "nvme_io": false 00:30:29.657 }, 00:30:29.657 "driver_specific": { 00:30:29.657 "gpt": { 00:30:29.657 "base_bdev": "Nvme0n1", 00:30:29.657 "offset_blocks": 256, 00:30:29.657 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:30:29.657 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:29.657 "partition_name": "SPDK_TEST_first" 00:30:29.657 } 00:30:29.657 } 00:30:29.657 } 00:30:29.657 ]' 00:30:29.657 16:46:06 -- bdev/blockdev.sh@620 -- # jq -r length 00:30:29.657 16:46:06 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:30:29.657 16:46:06 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:30:29.657 16:46:06 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:29.657 16:46:06 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:29.657 16:46:06 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:29.657 16:46:06 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:29.657 16:46:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:29.657 16:46:06 -- common/autotest_common.sh@10 -- # set +x 00:30:29.657 16:46:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:29.657 16:46:06 -- bdev/blockdev.sh@624 -- # bdev='[ 00:30:29.657 { 00:30:29.657 "name": "Nvme0n1p2", 00:30:29.657 "aliases": [ 00:30:29.657 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:30:29.657 ], 00:30:29.657 "product_name": "GPT Disk", 00:30:29.657 "block_size": 4096, 00:30:29.657 "num_blocks": 655103, 00:30:29.657 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:29.657 "assigned_rate_limits": { 00:30:29.657 "rw_ios_per_sec": 0, 00:30:29.657 
"rw_mbytes_per_sec": 0, 00:30:29.657 "r_mbytes_per_sec": 0, 00:30:29.657 "w_mbytes_per_sec": 0 00:30:29.657 }, 00:30:29.657 "claimed": false, 00:30:29.657 "zoned": false, 00:30:29.657 "supported_io_types": { 00:30:29.657 "read": true, 00:30:29.657 "write": true, 00:30:29.657 "unmap": true, 00:30:29.657 "write_zeroes": true, 00:30:29.657 "flush": true, 00:30:29.657 "reset": true, 00:30:29.657 "compare": true, 00:30:29.657 "compare_and_write": false, 00:30:29.657 "abort": true, 00:30:29.657 "nvme_admin": false, 00:30:29.657 "nvme_io": false 00:30:29.657 }, 00:30:29.657 "driver_specific": { 00:30:29.657 "gpt": { 00:30:29.657 "base_bdev": "Nvme0n1", 00:30:29.657 "offset_blocks": 655360, 00:30:29.657 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:30:29.657 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:29.657 "partition_name": "SPDK_TEST_second" 00:30:29.657 } 00:30:29.657 } 00:30:29.657 } 00:30:29.657 ]' 00:30:29.657 16:46:06 -- bdev/blockdev.sh@625 -- # jq -r length 00:30:29.918 16:46:06 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:30:29.918 16:46:06 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:30:29.918 16:46:06 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:29.918 16:46:06 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:29.918 16:46:06 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:29.918 16:46:06 -- bdev/blockdev.sh@629 -- # killprocess 142697 00:30:29.918 16:46:06 -- common/autotest_common.sh@926 -- # '[' -z 142697 ']' 00:30:29.918 16:46:06 -- common/autotest_common.sh@930 -- # kill -0 142697 00:30:29.918 16:46:06 -- common/autotest_common.sh@931 -- # uname 00:30:29.918 16:46:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:29.918 16:46:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 142697 00:30:29.918 16:46:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:29.918 16:46:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:29.918 16:46:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 142697' 00:30:29.918 killing process with pid 142697 00:30:29.918 16:46:06 -- common/autotest_common.sh@945 -- # kill 142697 00:30:29.919 16:46:06 -- common/autotest_common.sh@950 -- # wait 142697 00:30:31.821 00:30:31.821 real 0m3.902s 00:30:31.821 user 0m4.350s 00:30:31.821 sys 0m0.426s 00:30:31.821 16:46:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.821 ************************************ 00:30:31.821 END TEST bdev_gpt_uuid 00:30:31.821 ************************************ 00:30:31.821 16:46:08 -- common/autotest_common.sh@10 -- # set +x 00:30:31.821 16:46:08 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:30:31.821 16:46:08 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:31.821 16:46:08 -- bdev/blockdev.sh@809 -- # cleanup 00:30:31.821 16:46:08 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:31.821 16:46:08 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:31.821 16:46:08 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:30:31.821 16:46:08 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:30:31.821 16:46:08 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:30:31.821 16:46:08 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:32.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:32.080 Waiting for block devices as requested 00:30:32.080 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:32.080 16:46:08 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:30:32.080 16:46:08 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:30:32.338 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:32.338 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:32.338 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:32.338 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:32.338 16:46:08 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:30:32.338 00:30:32.338 real 0m45.833s 00:30:32.338 user 1m6.942s 00:30:32.338 sys 0m5.658s 00:30:32.338 16:46:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:32.338 16:46:08 -- common/autotest_common.sh@10 -- # set +x 00:30:32.338 ************************************ 00:30:32.338 END TEST blockdev_nvme_gpt 00:30:32.338 ************************************ 00:30:32.338 16:46:08 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:32.338 16:46:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:32.338 16:46:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:32.338 16:46:08 -- common/autotest_common.sh@10 -- # set +x 00:30:32.338 ************************************ 00:30:32.338 START TEST nvme 00:30:32.338 ************************************ 00:30:32.338 16:46:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:32.338 * Looking for test storage... 00:30:32.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:30:32.338 16:46:09 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:32.597 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:30:32.855 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:33.792 16:46:10 -- nvme/nvme.sh@79 -- # uname 00:30:33.792 16:46:10 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:30:33.792 16:46:10 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:30:33.792 16:46:10 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:30:33.792 16:46:10 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:30:33.792 16:46:10 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:30:33.792 16:46:10 -- common/autotest_common.sh@1045 -- # echo 0 00:30:33.792 16:46:10 -- common/autotest_common.sh@1047 -- # stubpid=143167 00:30:33.792 Waiting for stub to ready for secondary processes... 00:30:33.792 16:46:10 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:30:33.792 16:46:10 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:33.792 16:46:10 -- common/autotest_common.sh@1046 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:30:33.792 16:46:10 -- common/autotest_common.sh@1051 -- # [[ -e /proc/143167 ]] 00:30:33.792 16:46:10 -- common/autotest_common.sh@1052 -- # sleep 1s 00:30:33.792 [2024-07-11 16:46:10.563829] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:30:33.792 [2024-07-11 16:46:10.564044] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.727 16:46:11 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:34.727 16:46:11 -- common/autotest_common.sh@1051 -- # [[ -e /proc/143167 ]] 00:30:34.727 16:46:11 -- common/autotest_common.sh@1052 -- # sleep 1s 00:30:34.985 [2024-07-11 16:46:11.746972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:35.244 [2024-07-11 16:46:11.957960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.244 [2024-07-11 16:46:11.958104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.244 [2024-07-11 16:46:11.958312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.244 [2024-07-11 16:46:11.973429] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:35.244 [2024-07-11 16:46:11.982292] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:30:35.244 [2024-07-11 16:46:11.983355] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:30:35.811 16:46:12 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:35.811 16:46:12 -- common/autotest_common.sh@1054 -- # echo done. 00:30:35.811 done. 00:30:35.811 16:46:12 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:35.811 16:46:12 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:30:35.811 16:46:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:35.811 16:46:12 -- common/autotest_common.sh@10 -- # set +x 00:30:35.811 ************************************ 00:30:35.811 START TEST nvme_reset 00:30:35.811 ************************************ 00:30:35.811 16:46:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:36.069 Initializing NVMe Controllers 00:30:36.069 Skipping QEMU NVMe SSD at 0000:00:06.0 00:30:36.069 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:30:36.069 00:30:36.069 real 0m0.301s 00:30:36.069 user 0m0.117s 00:30:36.069 sys 0m0.100s 00:30:36.069 16:46:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:36.069 16:46:12 -- common/autotest_common.sh@10 -- # set +x 00:30:36.069 ************************************ 00:30:36.069 END TEST nvme_reset 00:30:36.069 ************************************ 00:30:36.069 16:46:12 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:30:36.069 16:46:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:36.069 16:46:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:36.069 16:46:12 -- common/autotest_common.sh@10 -- # set +x 00:30:36.338 ************************************ 00:30:36.338 START TEST nvme_identify 00:30:36.338 ************************************ 00:30:36.338 16:46:12 -- common/autotest_common.sh@1104 -- # nvme_identify 00:30:36.338 16:46:12 -- nvme/nvme.sh@12 -- # bdfs=() 00:30:36.338 16:46:12 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:30:36.338 16:46:12 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:30:36.338 16:46:12 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:30:36.338 16:46:12 -- common/autotest_common.sh@1498 -- # bdfs=() 
00:30:36.338 16:46:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:36.338 16:46:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:36.338 16:46:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:36.338 16:46:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:36.338 16:46:12 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:36.338 16:46:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:30:36.338 16:46:12 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:30:36.597 [2024-07-11 16:46:13.185593] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 143205 terminated unexpected 00:30:36.597 ===================================================== 00:30:36.597 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:36.597 ===================================================== 00:30:36.597 Controller Capabilities/Features 00:30:36.597 ================================ 00:30:36.597 Vendor ID: 1b36 00:30:36.597 Subsystem Vendor ID: 1af4 00:30:36.597 Serial Number: 12340 00:30:36.597 Model Number: QEMU NVMe Ctrl 00:30:36.597 Firmware Version: 8.0.0 00:30:36.597 Recommended Arb Burst: 6 00:30:36.597 IEEE OUI Identifier: 00 54 52 00:30:36.597 Multi-path I/O 00:30:36.597 May have multiple subsystem ports: No 00:30:36.597 May have multiple controllers: No 00:30:36.597 Associated with SR-IOV VF: No 00:30:36.597 Max Data Transfer Size: 524288 00:30:36.597 Max Number of Namespaces: 256 00:30:36.597 Max Number of I/O Queues: 64 00:30:36.597 NVMe Specification Version (VS): 1.4 00:30:36.597 NVMe Specification Version (Identify): 1.4 00:30:36.597 Maximum Queue Entries: 2048 00:30:36.597 Contiguous Queues Required: Yes 00:30:36.597 Arbitration Mechanisms Supported 00:30:36.597 Weighted Round Robin: Not Supported 00:30:36.597 Vendor Specific: Not Supported 00:30:36.597 Reset Timeout: 7500 ms 00:30:36.597 Doorbell Stride: 4 bytes 00:30:36.597 NVM Subsystem Reset: Not Supported 00:30:36.597 Command Sets Supported 00:30:36.597 NVM Command Set: Supported 00:30:36.597 Boot Partition: Not Supported 00:30:36.597 Memory Page Size Minimum: 4096 bytes 00:30:36.597 Memory Page Size Maximum: 65536 bytes 00:30:36.597 Persistent Memory Region: Not Supported 00:30:36.597 Optional Asynchronous Events Supported 00:30:36.597 Namespace Attribute Notices: Supported 00:30:36.597 Firmware Activation Notices: Not Supported 00:30:36.597 ANA Change Notices: Not Supported 00:30:36.597 PLE Aggregate Log Change Notices: Not Supported 00:30:36.597 LBA Status Info Alert Notices: Not Supported 00:30:36.597 EGE Aggregate Log Change Notices: Not Supported 00:30:36.597 Normal NVM Subsystem Shutdown event: Not Supported 00:30:36.597 Zone Descriptor Change Notices: Not Supported 00:30:36.597 Discovery Log Change Notices: Not Supported 00:30:36.597 Controller Attributes 00:30:36.597 128-bit Host Identifier: Not Supported 00:30:36.597 Non-Operational Permissive Mode: Not Supported 00:30:36.597 NVM Sets: Not Supported 00:30:36.597 Read Recovery Levels: Not Supported 00:30:36.597 Endurance Groups: Not Supported 00:30:36.597 Predictable Latency Mode: Not Supported 00:30:36.597 Traffic Based Keep ALive: Not Supported 00:30:36.597 Namespace Granularity: Not Supported 00:30:36.597 SQ Associations: Not Supported 00:30:36.597 UUID List: Not Supported 00:30:36.597 Multi-Domain Subsystem: Not Supported 00:30:36.597 
Fixed Capacity Management: Not Supported 00:30:36.597 Variable Capacity Management: Not Supported 00:30:36.597 Delete Endurance Group: Not Supported 00:30:36.597 Delete NVM Set: Not Supported 00:30:36.597 Extended LBA Formats Supported: Supported 00:30:36.597 Flexible Data Placement Supported: Not Supported 00:30:36.597 00:30:36.597 Controller Memory Buffer Support 00:30:36.597 ================================ 00:30:36.597 Supported: No 00:30:36.597 00:30:36.597 Persistent Memory Region Support 00:30:36.597 ================================ 00:30:36.597 Supported: No 00:30:36.597 00:30:36.597 Admin Command Set Attributes 00:30:36.597 ============================ 00:30:36.597 Security Send/Receive: Not Supported 00:30:36.597 Format NVM: Supported 00:30:36.597 Firmware Activate/Download: Not Supported 00:30:36.597 Namespace Management: Supported 00:30:36.597 Device Self-Test: Not Supported 00:30:36.597 Directives: Supported 00:30:36.597 NVMe-MI: Not Supported 00:30:36.597 Virtualization Management: Not Supported 00:30:36.597 Doorbell Buffer Config: Supported 00:30:36.597 Get LBA Status Capability: Not Supported 00:30:36.597 Command & Feature Lockdown Capability: Not Supported 00:30:36.597 Abort Command Limit: 4 00:30:36.597 Async Event Request Limit: 4 00:30:36.597 Number of Firmware Slots: N/A 00:30:36.597 Firmware Slot 1 Read-Only: N/A 00:30:36.597 Firmware Activation Without Reset: N/A 00:30:36.597 Multiple Update Detection Support: N/A 00:30:36.597 Firmware Update Granularity: No Information Provided 00:30:36.597 Per-Namespace SMART Log: Yes 00:30:36.597 Asymmetric Namespace Access Log Page: Not Supported 00:30:36.597 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:36.597 Command Effects Log Page: Supported 00:30:36.597 Get Log Page Extended Data: Supported 00:30:36.597 Telemetry Log Pages: Not Supported 00:30:36.597 Persistent Event Log Pages: Not Supported 00:30:36.597 Supported Log Pages Log Page: May Support 00:30:36.597 Commands Supported & Effects Log Page: Not Supported 00:30:36.597 Feature Identifiers & Effects Log Page:May Support 00:30:36.597 NVMe-MI Commands & Effects Log Page: May Support 00:30:36.597 Data Area 4 for Telemetry Log: Not Supported 00:30:36.597 Error Log Page Entries Supported: 1 00:30:36.597 Keep Alive: Not Supported 00:30:36.597 00:30:36.597 NVM Command Set Attributes 00:30:36.597 ========================== 00:30:36.597 Submission Queue Entry Size 00:30:36.597 Max: 64 00:30:36.597 Min: 64 00:30:36.597 Completion Queue Entry Size 00:30:36.597 Max: 16 00:30:36.597 Min: 16 00:30:36.597 Number of Namespaces: 256 00:30:36.597 Compare Command: Supported 00:30:36.597 Write Uncorrectable Command: Not Supported 00:30:36.597 Dataset Management Command: Supported 00:30:36.597 Write Zeroes Command: Supported 00:30:36.597 Set Features Save Field: Supported 00:30:36.597 Reservations: Not Supported 00:30:36.597 Timestamp: Supported 00:30:36.597 Copy: Supported 00:30:36.597 Volatile Write Cache: Present 00:30:36.597 Atomic Write Unit (Normal): 1 00:30:36.597 Atomic Write Unit (PFail): 1 00:30:36.597 Atomic Compare & Write Unit: 1 00:30:36.597 Fused Compare & Write: Not Supported 00:30:36.597 Scatter-Gather List 00:30:36.597 SGL Command Set: Supported 00:30:36.597 SGL Keyed: Not Supported 00:30:36.597 SGL Bit Bucket Descriptor: Not Supported 00:30:36.597 SGL Metadata Pointer: Not Supported 00:30:36.597 Oversized SGL: Not Supported 00:30:36.597 SGL Metadata Address: Not Supported 00:30:36.597 SGL Offset: Not Supported 00:30:36.597 Transport SGL Data Block: Not Supported 
00:30:36.597 Replay Protected Memory Block: Not Supported 00:30:36.597 00:30:36.597 Firmware Slot Information 00:30:36.597 ========================= 00:30:36.597 Active slot: 1 00:30:36.597 Slot 1 Firmware Revision: 1.0 00:30:36.597 00:30:36.597 00:30:36.597 Commands Supported and Effects 00:30:36.597 ============================== 00:30:36.597 Admin Commands 00:30:36.597 -------------- 00:30:36.597 Delete I/O Submission Queue (00h): Supported 00:30:36.597 Create I/O Submission Queue (01h): Supported 00:30:36.597 Get Log Page (02h): Supported 00:30:36.597 Delete I/O Completion Queue (04h): Supported 00:30:36.597 Create I/O Completion Queue (05h): Supported 00:30:36.597 Identify (06h): Supported 00:30:36.597 Abort (08h): Supported 00:30:36.597 Set Features (09h): Supported 00:30:36.597 Get Features (0Ah): Supported 00:30:36.597 Asynchronous Event Request (0Ch): Supported 00:30:36.597 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:36.597 Directive Send (19h): Supported 00:30:36.597 Directive Receive (1Ah): Supported 00:30:36.597 Virtualization Management (1Ch): Supported 00:30:36.597 Doorbell Buffer Config (7Ch): Supported 00:30:36.597 Format NVM (80h): Supported LBA-Change 00:30:36.597 I/O Commands 00:30:36.597 ------------ 00:30:36.597 Flush (00h): Supported LBA-Change 00:30:36.597 Write (01h): Supported LBA-Change 00:30:36.597 Read (02h): Supported 00:30:36.597 Compare (05h): Supported 00:30:36.597 Write Zeroes (08h): Supported LBA-Change 00:30:36.597 Dataset Management (09h): Supported LBA-Change 00:30:36.598 Unknown (0Ch): Supported 00:30:36.598 Unknown (12h): Supported 00:30:36.598 Copy (19h): Supported LBA-Change 00:30:36.598 Unknown (1Dh): Supported LBA-Change 00:30:36.598 00:30:36.598 Error Log 00:30:36.598 ========= 00:30:36.598 00:30:36.598 Arbitration 00:30:36.598 =========== 00:30:36.598 Arbitration Burst: no limit 00:30:36.598 00:30:36.598 Power Management 00:30:36.598 ================ 00:30:36.598 Number of Power States: 1 00:30:36.598 Current Power State: Power State #0 00:30:36.598 Power State #0: 00:30:36.598 Max Power: 25.00 W 00:30:36.598 Non-Operational State: Operational 00:30:36.598 Entry Latency: 16 microseconds 00:30:36.598 Exit Latency: 4 microseconds 00:30:36.598 Relative Read Throughput: 0 00:30:36.598 Relative Read Latency: 0 00:30:36.598 Relative Write Throughput: 0 00:30:36.598 Relative Write Latency: 0 00:30:36.598 Idle Power: Not Reported 00:30:36.598 Active Power: Not Reported 00:30:36.598 Non-Operational Permissive Mode: Not Supported 00:30:36.598 00:30:36.598 Health Information 00:30:36.598 ================== 00:30:36.598 Critical Warnings: 00:30:36.598 Available Spare Space: OK 00:30:36.598 Temperature: OK 00:30:36.598 Device Reliability: OK 00:30:36.598 Read Only: No 00:30:36.598 Volatile Memory Backup: OK 00:30:36.598 Current Temperature: 323 Kelvin (50 Celsius) 00:30:36.598 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:36.598 Available Spare: 0% 00:30:36.598 Available Spare Threshold: 0% 00:30:36.598 Life Percentage Used: 0% 00:30:36.598 Data Units Read: 8369 00:30:36.598 Data Units Written: 4083 00:30:36.598 Host Read Commands: 393571 00:30:36.598 Host Write Commands: 211892 00:30:36.598 Controller Busy Time: 0 minutes 00:30:36.598 Power Cycles: 0 00:30:36.598 Power On Hours: 0 hours 00:30:36.598 Unsafe Shutdowns: 0 00:30:36.598 Unrecoverable Media Errors: 0 00:30:36.598 Lifetime Error Log Entries: 0 00:30:36.598 Warning Temperature Time: 0 minutes 00:30:36.598 Critical Temperature Time: 0 minutes 00:30:36.598 00:30:36.598 
Number of Queues 00:30:36.598 ================ 00:30:36.598 Number of I/O Submission Queues: 64 00:30:36.598 Number of I/O Completion Queues: 64 00:30:36.598 00:30:36.598 ZNS Specific Controller Data 00:30:36.598 ============================ 00:30:36.598 Zone Append Size Limit: 0 00:30:36.598 00:30:36.598 00:30:36.598 Active Namespaces 00:30:36.598 ================= 00:30:36.598 Namespace ID:1 00:30:36.598 Error Recovery Timeout: Unlimited 00:30:36.598 Command Set Identifier: NVM (00h) 00:30:36.598 Deallocate: Supported 00:30:36.598 Deallocated/Unwritten Error: Supported 00:30:36.598 Deallocated Read Value: All 0x00 00:30:36.598 Deallocate in Write Zeroes: Not Supported 00:30:36.598 Deallocated Guard Field: 0xFFFF 00:30:36.598 Flush: Supported 00:30:36.598 Reservation: Not Supported 00:30:36.598 Namespace Sharing Capabilities: Private 00:30:36.598 Size (in LBAs): 1310720 (5GiB) 00:30:36.598 Capacity (in LBAs): 1310720 (5GiB) 00:30:36.598 Utilization (in LBAs): 1310720 (5GiB) 00:30:36.598 Thin Provisioning: Not Supported 00:30:36.598 Per-NS Atomic Units: No 00:30:36.598 Maximum Single Source Range Length: 128 00:30:36.598 Maximum Copy Length: 128 00:30:36.598 Maximum Source Range Count: 128 00:30:36.598 NGUID/EUI64 Never Reused: No 00:30:36.598 Namespace Write Protected: No 00:30:36.598 Number of LBA Formats: 8 00:30:36.598 Current LBA Format: LBA Format #04 00:30:36.598 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:36.598 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:36.598 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:36.598 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:36.598 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:36.598 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:36.598 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:36.598 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:36.598 00:30:36.598 16:46:13 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:36.598 16:46:13 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:30:36.856 ===================================================== 00:30:36.856 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:36.856 ===================================================== 00:30:36.856 Controller Capabilities/Features 00:30:36.856 ================================ 00:30:36.856 Vendor ID: 1b36 00:30:36.856 Subsystem Vendor ID: 1af4 00:30:36.856 Serial Number: 12340 00:30:36.856 Model Number: QEMU NVMe Ctrl 00:30:36.856 Firmware Version: 8.0.0 00:30:36.856 Recommended Arb Burst: 6 00:30:36.856 IEEE OUI Identifier: 00 54 52 00:30:36.856 Multi-path I/O 00:30:36.856 May have multiple subsystem ports: No 00:30:36.856 May have multiple controllers: No 00:30:36.856 Associated with SR-IOV VF: No 00:30:36.856 Max Data Transfer Size: 524288 00:30:36.856 Max Number of Namespaces: 256 00:30:36.856 Max Number of I/O Queues: 64 00:30:36.856 NVMe Specification Version (VS): 1.4 00:30:36.856 NVMe Specification Version (Identify): 1.4 00:30:36.856 Maximum Queue Entries: 2048 00:30:36.856 Contiguous Queues Required: Yes 00:30:36.856 Arbitration Mechanisms Supported 00:30:36.856 Weighted Round Robin: Not Supported 00:30:36.856 Vendor Specific: Not Supported 00:30:36.856 Reset Timeout: 7500 ms 00:30:36.856 Doorbell Stride: 4 bytes 00:30:36.856 NVM Subsystem Reset: Not Supported 00:30:36.856 Command Sets Supported 00:30:36.856 NVM Command Set: Supported 00:30:36.856 Boot Partition: Not Supported 00:30:36.856 Memory Page Size 
Minimum: 4096 bytes 00:30:36.856 Memory Page Size Maximum: 65536 bytes 00:30:36.856 Persistent Memory Region: Not Supported 00:30:36.856 Optional Asynchronous Events Supported 00:30:36.856 Namespace Attribute Notices: Supported 00:30:36.856 Firmware Activation Notices: Not Supported 00:30:36.856 ANA Change Notices: Not Supported 00:30:36.856 PLE Aggregate Log Change Notices: Not Supported 00:30:36.856 LBA Status Info Alert Notices: Not Supported 00:30:36.856 EGE Aggregate Log Change Notices: Not Supported 00:30:36.856 Normal NVM Subsystem Shutdown event: Not Supported 00:30:36.856 Zone Descriptor Change Notices: Not Supported 00:30:36.856 Discovery Log Change Notices: Not Supported 00:30:36.856 Controller Attributes 00:30:36.856 128-bit Host Identifier: Not Supported 00:30:36.856 Non-Operational Permissive Mode: Not Supported 00:30:36.856 NVM Sets: Not Supported 00:30:36.856 Read Recovery Levels: Not Supported 00:30:36.856 Endurance Groups: Not Supported 00:30:36.856 Predictable Latency Mode: Not Supported 00:30:36.856 Traffic Based Keep ALive: Not Supported 00:30:36.856 Namespace Granularity: Not Supported 00:30:36.856 SQ Associations: Not Supported 00:30:36.856 UUID List: Not Supported 00:30:36.856 Multi-Domain Subsystem: Not Supported 00:30:36.856 Fixed Capacity Management: Not Supported 00:30:36.856 Variable Capacity Management: Not Supported 00:30:36.856 Delete Endurance Group: Not Supported 00:30:36.856 Delete NVM Set: Not Supported 00:30:36.856 Extended LBA Formats Supported: Supported 00:30:36.856 Flexible Data Placement Supported: Not Supported 00:30:36.856 00:30:36.856 Controller Memory Buffer Support 00:30:36.856 ================================ 00:30:36.856 Supported: No 00:30:36.856 00:30:36.856 Persistent Memory Region Support 00:30:36.856 ================================ 00:30:36.856 Supported: No 00:30:36.856 00:30:36.856 Admin Command Set Attributes 00:30:36.856 ============================ 00:30:36.856 Security Send/Receive: Not Supported 00:30:36.856 Format NVM: Supported 00:30:36.856 Firmware Activate/Download: Not Supported 00:30:36.856 Namespace Management: Supported 00:30:36.856 Device Self-Test: Not Supported 00:30:36.856 Directives: Supported 00:30:36.856 NVMe-MI: Not Supported 00:30:36.856 Virtualization Management: Not Supported 00:30:36.856 Doorbell Buffer Config: Supported 00:30:36.856 Get LBA Status Capability: Not Supported 00:30:36.856 Command & Feature Lockdown Capability: Not Supported 00:30:36.856 Abort Command Limit: 4 00:30:36.856 Async Event Request Limit: 4 00:30:36.856 Number of Firmware Slots: N/A 00:30:36.856 Firmware Slot 1 Read-Only: N/A 00:30:36.856 Firmware Activation Without Reset: N/A 00:30:36.856 Multiple Update Detection Support: N/A 00:30:36.856 Firmware Update Granularity: No Information Provided 00:30:36.856 Per-Namespace SMART Log: Yes 00:30:36.856 Asymmetric Namespace Access Log Page: Not Supported 00:30:36.856 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:36.856 Command Effects Log Page: Supported 00:30:36.856 Get Log Page Extended Data: Supported 00:30:36.856 Telemetry Log Pages: Not Supported 00:30:36.856 Persistent Event Log Pages: Not Supported 00:30:36.856 Supported Log Pages Log Page: May Support 00:30:36.856 Commands Supported & Effects Log Page: Not Supported 00:30:36.856 Feature Identifiers & Effects Log Page:May Support 00:30:36.856 NVMe-MI Commands & Effects Log Page: May Support 00:30:36.856 Data Area 4 for Telemetry Log: Not Supported 00:30:36.856 Error Log Page Entries Supported: 1 00:30:36.856 Keep Alive: Not 
Supported 00:30:36.856 00:30:36.856 NVM Command Set Attributes 00:30:36.856 ========================== 00:30:36.856 Submission Queue Entry Size 00:30:36.856 Max: 64 00:30:36.856 Min: 64 00:30:36.856 Completion Queue Entry Size 00:30:36.856 Max: 16 00:30:36.856 Min: 16 00:30:36.856 Number of Namespaces: 256 00:30:36.856 Compare Command: Supported 00:30:36.856 Write Uncorrectable Command: Not Supported 00:30:36.856 Dataset Management Command: Supported 00:30:36.856 Write Zeroes Command: Supported 00:30:36.856 Set Features Save Field: Supported 00:30:36.856 Reservations: Not Supported 00:30:36.856 Timestamp: Supported 00:30:36.856 Copy: Supported 00:30:36.856 Volatile Write Cache: Present 00:30:36.856 Atomic Write Unit (Normal): 1 00:30:36.856 Atomic Write Unit (PFail): 1 00:30:36.856 Atomic Compare & Write Unit: 1 00:30:36.856 Fused Compare & Write: Not Supported 00:30:36.856 Scatter-Gather List 00:30:36.856 SGL Command Set: Supported 00:30:36.856 SGL Keyed: Not Supported 00:30:36.856 SGL Bit Bucket Descriptor: Not Supported 00:30:36.856 SGL Metadata Pointer: Not Supported 00:30:36.856 Oversized SGL: Not Supported 00:30:36.856 SGL Metadata Address: Not Supported 00:30:36.856 SGL Offset: Not Supported 00:30:36.856 Transport SGL Data Block: Not Supported 00:30:36.856 Replay Protected Memory Block: Not Supported 00:30:36.856 00:30:36.856 Firmware Slot Information 00:30:36.856 ========================= 00:30:36.856 Active slot: 1 00:30:36.856 Slot 1 Firmware Revision: 1.0 00:30:36.856 00:30:36.856 00:30:36.856 Commands Supported and Effects 00:30:36.856 ============================== 00:30:36.856 Admin Commands 00:30:36.856 -------------- 00:30:36.856 Delete I/O Submission Queue (00h): Supported 00:30:36.857 Create I/O Submission Queue (01h): Supported 00:30:36.857 Get Log Page (02h): Supported 00:30:36.857 Delete I/O Completion Queue (04h): Supported 00:30:36.857 Create I/O Completion Queue (05h): Supported 00:30:36.857 Identify (06h): Supported 00:30:36.857 Abort (08h): Supported 00:30:36.857 Set Features (09h): Supported 00:30:36.857 Get Features (0Ah): Supported 00:30:36.857 Asynchronous Event Request (0Ch): Supported 00:30:36.857 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:36.857 Directive Send (19h): Supported 00:30:36.857 Directive Receive (1Ah): Supported 00:30:36.857 Virtualization Management (1Ch): Supported 00:30:36.857 Doorbell Buffer Config (7Ch): Supported 00:30:36.857 Format NVM (80h): Supported LBA-Change 00:30:36.857 I/O Commands 00:30:36.857 ------------ 00:30:36.857 Flush (00h): Supported LBA-Change 00:30:36.857 Write (01h): Supported LBA-Change 00:30:36.857 Read (02h): Supported 00:30:36.857 Compare (05h): Supported 00:30:36.857 Write Zeroes (08h): Supported LBA-Change 00:30:36.857 Dataset Management (09h): Supported LBA-Change 00:30:36.857 Unknown (0Ch): Supported 00:30:36.857 Unknown (12h): Supported 00:30:36.857 Copy (19h): Supported LBA-Change 00:30:36.857 Unknown (1Dh): Supported LBA-Change 00:30:36.857 00:30:36.857 Error Log 00:30:36.857 ========= 00:30:36.857 00:30:36.857 Arbitration 00:30:36.857 =========== 00:30:36.857 Arbitration Burst: no limit 00:30:36.857 00:30:36.857 Power Management 00:30:36.857 ================ 00:30:36.857 Number of Power States: 1 00:30:36.857 Current Power State: Power State #0 00:30:36.857 Power State #0: 00:30:36.857 Max Power: 25.00 W 00:30:36.857 Non-Operational State: Operational 00:30:36.857 Entry Latency: 16 microseconds 00:30:36.857 Exit Latency: 4 microseconds 00:30:36.857 Relative Read Throughput: 0 
00:30:36.857 Relative Read Latency: 0 00:30:36.857 Relative Write Throughput: 0 00:30:36.857 Relative Write Latency: 0 00:30:36.857 Idle Power: Not Reported 00:30:36.857 Active Power: Not Reported 00:30:36.857 Non-Operational Permissive Mode: Not Supported 00:30:36.857 00:30:36.857 Health Information 00:30:36.857 ================== 00:30:36.857 Critical Warnings: 00:30:36.857 Available Spare Space: OK 00:30:36.857 Temperature: OK 00:30:36.857 Device Reliability: OK 00:30:36.857 Read Only: No 00:30:36.857 Volatile Memory Backup: OK 00:30:36.857 Current Temperature: 323 Kelvin (50 Celsius) 00:30:36.857 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:36.857 Available Spare: 0% 00:30:36.857 Available Spare Threshold: 0% 00:30:36.857 Life Percentage Used: 0% 00:30:36.857 Data Units Read: 8369 00:30:36.857 Data Units Written: 4083 00:30:36.857 Host Read Commands: 393571 00:30:36.857 Host Write Commands: 211892 00:30:36.857 Controller Busy Time: 0 minutes 00:30:36.857 Power Cycles: 0 00:30:36.857 Power On Hours: 0 hours 00:30:36.857 Unsafe Shutdowns: 0 00:30:36.857 Unrecoverable Media Errors: 0 00:30:36.857 Lifetime Error Log Entries: 0 00:30:36.857 Warning Temperature Time: 0 minutes 00:30:36.857 Critical Temperature Time: 0 minutes 00:30:36.857 00:30:36.857 Number of Queues 00:30:36.857 ================ 00:30:36.857 Number of I/O Submission Queues: 64 00:30:36.857 Number of I/O Completion Queues: 64 00:30:36.857 00:30:36.857 ZNS Specific Controller Data 00:30:36.857 ============================ 00:30:36.857 Zone Append Size Limit: 0 00:30:36.857 00:30:36.857 00:30:36.857 Active Namespaces 00:30:36.857 ================= 00:30:36.857 Namespace ID:1 00:30:36.857 Error Recovery Timeout: Unlimited 00:30:36.857 Command Set Identifier: NVM (00h) 00:30:36.857 Deallocate: Supported 00:30:36.857 Deallocated/Unwritten Error: Supported 00:30:36.857 Deallocated Read Value: All 0x00 00:30:36.857 Deallocate in Write Zeroes: Not Supported 00:30:36.857 Deallocated Guard Field: 0xFFFF 00:30:36.857 Flush: Supported 00:30:36.857 Reservation: Not Supported 00:30:36.857 Namespace Sharing Capabilities: Private 00:30:36.857 Size (in LBAs): 1310720 (5GiB) 00:30:36.857 Capacity (in LBAs): 1310720 (5GiB) 00:30:36.857 Utilization (in LBAs): 1310720 (5GiB) 00:30:36.857 Thin Provisioning: Not Supported 00:30:36.857 Per-NS Atomic Units: No 00:30:36.857 Maximum Single Source Range Length: 128 00:30:36.857 Maximum Copy Length: 128 00:30:36.857 Maximum Source Range Count: 128 00:30:36.857 NGUID/EUI64 Never Reused: No 00:30:36.857 Namespace Write Protected: No 00:30:36.857 Number of LBA Formats: 8 00:30:36.857 Current LBA Format: LBA Format #04 00:30:36.857 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:36.857 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:36.857 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:36.857 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:36.857 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:36.857 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:36.857 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:36.857 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:36.857 00:30:36.857 00:30:36.857 real 0m0.681s 00:30:36.857 user 0m0.268s 00:30:36.857 sys 0m0.298s 00:30:36.857 16:46:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:36.857 16:46:13 -- common/autotest_common.sh@10 -- # set +x 00:30:36.857 ************************************ 00:30:36.857 END TEST nvme_identify 00:30:36.857 ************************************ 00:30:36.857 
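Both identify dumps above come from the same controller, first discovered via gen_nvme.sh and then addressed explicitly with a transport ID. To pull a few fields instead of the full report, the second form can be combined with grep; a small sketch, with the binary path and PCI address taken from this log (the controller must be bound to a userspace driver, as setup.sh arranged earlier in this run):

    IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    # Address one controller directly by transport type and PCI BDF:
    sudo "$IDENTIFY" -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 \
        | grep -E 'Serial Number|Model Number|Firmware Version|NVMe Specification Version'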
16:46:13 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:30:36.857 16:46:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:36.857 16:46:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:36.857 16:46:13 -- common/autotest_common.sh@10 -- # set +x 00:30:36.857 ************************************ 00:30:36.857 START TEST nvme_perf 00:30:36.857 ************************************ 00:30:36.857 16:46:13 -- common/autotest_common.sh@1104 -- # nvme_perf 00:30:36.857 16:46:13 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:30:38.228 Initializing NVMe Controllers 00:30:38.228 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:38.228 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:38.228 Initialization complete. Launching workers. 00:30:38.228 ======================================================== 00:30:38.228 Latency(us) 00:30:38.228 Device Information : IOPS MiB/s Average min max 00:30:38.228 PCIE (0000:00:06.0) NSID 1 from core 0: 57344.00 672.00 2232.80 1208.69 6315.38 00:30:38.228 ======================================================== 00:30:38.228 Total : 57344.00 672.00 2232.80 1208.69 6315.38 00:30:38.228 00:30:38.228 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:38.228 ================================================================================= 00:30:38.228 1.00000% : 1377.745us 00:30:38.228 10.00000% : 1593.716us 00:30:38.228 25.00000% : 1824.582us 00:30:38.228 50.00000% : 2219.287us 00:30:38.228 75.00000% : 2591.651us 00:30:38.228 90.00000% : 2859.753us 00:30:38.228 95.00000% : 3068.276us 00:30:38.228 98.00000% : 3381.062us 00:30:38.228 99.00000% : 3530.007us 00:30:38.228 99.50000% : 3932.160us 00:30:38.228 99.90000% : 5362.036us 00:30:38.228 99.99000% : 6106.764us 00:30:38.228 99.99900% : 6345.076us 00:30:38.228 99.99990% : 6345.076us 00:30:38.228 99.99999% : 6345.076us 00:30:38.228 00:30:38.228 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:38.228 ============================================================================== 00:30:38.228 Range in us Cumulative IO count 00:30:38.228 1206.458 - 1213.905: 0.0017% ( 1) 00:30:38.228 1213.905 - 1221.353: 0.0052% ( 2) 00:30:38.228 1228.800 - 1236.247: 0.0070% ( 1) 00:30:38.228 1236.247 - 1243.695: 0.0122% ( 3) 00:30:38.228 1243.695 - 1251.142: 0.0140% ( 1) 00:30:38.228 1258.589 - 1266.036: 0.0174% ( 2) 00:30:38.228 1266.036 - 1273.484: 0.0244% ( 4) 00:30:38.228 1273.484 - 1280.931: 0.0314% ( 4) 00:30:38.228 1280.931 - 1288.378: 0.0471% ( 9) 00:30:38.228 1288.378 - 1295.825: 0.0732% ( 15) 00:30:38.228 1295.825 - 1303.273: 0.0924% ( 11) 00:30:38.228 1303.273 - 1310.720: 0.1221% ( 17) 00:30:38.228 1310.720 - 1318.167: 0.1674% ( 26) 00:30:38.228 1318.167 - 1325.615: 0.2197% ( 30) 00:30:38.228 1325.615 - 1333.062: 0.2703% ( 29) 00:30:38.228 1333.062 - 1340.509: 0.3575% ( 50) 00:30:38.228 1340.509 - 1347.956: 0.4918% ( 77) 00:30:38.228 1347.956 - 1355.404: 0.5947% ( 59) 00:30:38.228 1355.404 - 1362.851: 0.7289% ( 77) 00:30:38.228 1362.851 - 1370.298: 0.8702% ( 81) 00:30:38.228 1370.298 - 1377.745: 1.0219% ( 87) 00:30:38.228 1377.745 - 1385.193: 1.1806% ( 91) 00:30:38.228 1385.193 - 1392.640: 1.3567% ( 101) 00:30:38.228 1392.640 - 1400.087: 1.5294% ( 99) 00:30:38.228 1400.087 - 1407.535: 1.7543% ( 129) 00:30:38.228 1407.535 - 1414.982: 1.9618% ( 119) 00:30:38.228 1414.982 - 1422.429: 2.1955% ( 134) 00:30:38.228 1422.429 - 1429.876: 2.4414% ( 141) 00:30:38.228 1429.876 - 
1437.324: 2.6855% ( 140) 00:30:38.228 1437.324 - 1444.771: 2.9349% ( 143) 00:30:38.228 1444.771 - 1452.218: 3.2418% ( 176) 00:30:38.228 1452.218 - 1459.665: 3.5226% ( 161) 00:30:38.228 1459.665 - 1467.113: 3.8295% ( 176) 00:30:38.229 1467.113 - 1474.560: 4.1399% ( 178) 00:30:38.229 1474.560 - 1482.007: 4.4399% ( 172) 00:30:38.229 1482.007 - 1489.455: 4.7729% ( 191) 00:30:38.229 1489.455 - 1496.902: 5.0956% ( 185) 00:30:38.229 1496.902 - 1504.349: 5.4618% ( 210) 00:30:38.229 1504.349 - 1511.796: 5.8123% ( 201) 00:30:38.229 1511.796 - 1519.244: 6.1802% ( 211) 00:30:38.229 1519.244 - 1526.691: 6.5395% ( 206) 00:30:38.229 1526.691 - 1534.138: 6.9231% ( 220) 00:30:38.229 1534.138 - 1541.585: 7.3417% ( 240) 00:30:38.229 1541.585 - 1549.033: 7.7480% ( 233) 00:30:38.229 1549.033 - 1556.480: 8.1560% ( 234) 00:30:38.229 1556.480 - 1563.927: 8.6199% ( 266) 00:30:38.229 1563.927 - 1571.375: 9.0559% ( 250) 00:30:38.229 1571.375 - 1578.822: 9.4866% ( 247) 00:30:38.229 1578.822 - 1586.269: 9.9348% ( 257) 00:30:38.229 1586.269 - 1593.716: 10.3707% ( 250) 00:30:38.229 1593.716 - 1601.164: 10.8311% ( 264) 00:30:38.229 1601.164 - 1608.611: 11.3194% ( 280) 00:30:38.229 1608.611 - 1616.058: 11.7606% ( 253) 00:30:38.229 1616.058 - 1623.505: 12.2367% ( 273) 00:30:38.229 1623.505 - 1630.953: 12.7250% ( 280) 00:30:38.229 1630.953 - 1638.400: 13.1644% ( 252) 00:30:38.229 1638.400 - 1645.847: 13.6684% ( 289) 00:30:38.229 1645.847 - 1653.295: 14.1427% ( 272) 00:30:38.229 1653.295 - 1660.742: 14.6118% ( 269) 00:30:38.229 1660.742 - 1668.189: 15.0879% ( 273) 00:30:38.229 1668.189 - 1675.636: 15.5919% ( 289) 00:30:38.229 1675.636 - 1683.084: 16.0575% ( 267) 00:30:38.229 1683.084 - 1690.531: 16.5388% ( 276) 00:30:38.229 1690.531 - 1697.978: 17.0428% ( 289) 00:30:38.229 1697.978 - 1705.425: 17.5258% ( 277) 00:30:38.229 1705.425 - 1712.873: 18.0246% ( 286) 00:30:38.229 1712.873 - 1720.320: 18.4989% ( 272) 00:30:38.229 1720.320 - 1727.767: 18.9732% ( 272) 00:30:38.229 1727.767 - 1735.215: 19.4475% ( 272) 00:30:38.229 1735.215 - 1742.662: 19.9463% ( 286) 00:30:38.229 1742.662 - 1750.109: 20.4398% ( 283) 00:30:38.229 1750.109 - 1757.556: 20.9176% ( 274) 00:30:38.229 1757.556 - 1765.004: 21.4042% ( 279) 00:30:38.229 1765.004 - 1772.451: 21.8959% ( 282) 00:30:38.229 1772.451 - 1779.898: 22.3755% ( 275) 00:30:38.229 1779.898 - 1787.345: 22.8760% ( 287) 00:30:38.229 1787.345 - 1794.793: 23.3468% ( 270) 00:30:38.229 1794.793 - 1802.240: 23.8316% ( 278) 00:30:38.229 1802.240 - 1809.687: 24.3199% ( 280) 00:30:38.229 1809.687 - 1817.135: 24.7960% ( 273) 00:30:38.229 1817.135 - 1824.582: 25.2686% ( 271) 00:30:38.229 1824.582 - 1832.029: 25.7586% ( 281) 00:30:38.229 1832.029 - 1839.476: 26.2364% ( 274) 00:30:38.229 1839.476 - 1846.924: 26.6933% ( 262) 00:30:38.229 1846.924 - 1854.371: 27.2042% ( 293) 00:30:38.229 1854.371 - 1861.818: 27.6751% ( 270) 00:30:38.229 1861.818 - 1869.265: 28.1459% ( 270) 00:30:38.229 1869.265 - 1876.713: 28.6394% ( 283) 00:30:38.229 1876.713 - 1884.160: 29.1260% ( 279) 00:30:38.229 1884.160 - 1891.607: 29.6055% ( 275) 00:30:38.229 1891.607 - 1899.055: 30.1130% ( 291) 00:30:38.229 1899.055 - 1906.502: 30.5751% ( 265) 00:30:38.229 1906.502 - 1921.396: 31.5412% ( 554) 00:30:38.229 1921.396 - 1936.291: 32.5003% ( 550) 00:30:38.229 1936.291 - 1951.185: 33.4560% ( 548) 00:30:38.229 1951.185 - 1966.080: 34.4169% ( 551) 00:30:38.229 1966.080 - 1980.975: 35.3812% ( 553) 00:30:38.229 1980.975 - 1995.869: 36.3438% ( 552) 00:30:38.229 1995.869 - 2010.764: 37.3169% ( 558) 00:30:38.229 2010.764 - 2025.658: 38.2935% ( 560) 
00:30:38.229 2025.658 - 2040.553: 39.2561% ( 552) 00:30:38.229 2040.553 - 2055.447: 40.2204% ( 553) 00:30:38.229 2055.447 - 2070.342: 41.1935% ( 558) 00:30:38.229 2070.342 - 2085.236: 42.1735% ( 562) 00:30:38.229 2085.236 - 2100.131: 43.1170% ( 541) 00:30:38.229 2100.131 - 2115.025: 44.0813% ( 553) 00:30:38.229 2115.025 - 2129.920: 45.0666% ( 565) 00:30:38.229 2129.920 - 2144.815: 46.0031% ( 537) 00:30:38.229 2144.815 - 2159.709: 46.9378% ( 536) 00:30:38.229 2159.709 - 2174.604: 47.9248% ( 566) 00:30:38.229 2174.604 - 2189.498: 48.8787% ( 547) 00:30:38.229 2189.498 - 2204.393: 49.8308% ( 546) 00:30:38.229 2204.393 - 2219.287: 50.8283% ( 572) 00:30:38.229 2219.287 - 2234.182: 51.8032% ( 559) 00:30:38.229 2234.182 - 2249.076: 52.7448% ( 540) 00:30:38.229 2249.076 - 2263.971: 53.7284% ( 564) 00:30:38.229 2263.971 - 2278.865: 54.7102% ( 563) 00:30:38.229 2278.865 - 2293.760: 55.6588% ( 544) 00:30:38.229 2293.760 - 2308.655: 56.6180% ( 550) 00:30:38.229 2308.655 - 2323.549: 57.6102% ( 569) 00:30:38.229 2323.549 - 2338.444: 58.5693% ( 550) 00:30:38.229 2338.444 - 2353.338: 59.5302% ( 551) 00:30:38.229 2353.338 - 2368.233: 60.4998% ( 556) 00:30:38.229 2368.233 - 2383.127: 61.4816% ( 563) 00:30:38.229 2383.127 - 2398.022: 62.4512% ( 556) 00:30:38.229 2398.022 - 2412.916: 63.4016% ( 545) 00:30:38.229 2412.916 - 2427.811: 64.3677% ( 554) 00:30:38.229 2427.811 - 2442.705: 65.3355% ( 555) 00:30:38.229 2442.705 - 2457.600: 66.3435% ( 578) 00:30:38.229 2457.600 - 2472.495: 67.3043% ( 551) 00:30:38.229 2472.495 - 2487.389: 68.2757% ( 557) 00:30:38.229 2487.389 - 2502.284: 69.2383% ( 552) 00:30:38.229 2502.284 - 2517.178: 70.2218% ( 564) 00:30:38.229 2517.178 - 2532.073: 71.2071% ( 565) 00:30:38.229 2532.073 - 2546.967: 72.1906% ( 564) 00:30:38.229 2546.967 - 2561.862: 73.1672% ( 560) 00:30:38.229 2561.862 - 2576.756: 74.1612% ( 570) 00:30:38.229 2576.756 - 2591.651: 75.1273% ( 554) 00:30:38.229 2591.651 - 2606.545: 76.0864% ( 550) 00:30:38.229 2606.545 - 2621.440: 77.0874% ( 574) 00:30:38.229 2621.440 - 2636.335: 78.0239% ( 537) 00:30:38.229 2636.335 - 2651.229: 78.9847% ( 551) 00:30:38.229 2651.229 - 2666.124: 79.9561% ( 557) 00:30:38.229 2666.124 - 2681.018: 80.8820% ( 531) 00:30:38.229 2681.018 - 2695.913: 81.8237% ( 540) 00:30:38.229 2695.913 - 2710.807: 82.7462% ( 529) 00:30:38.229 2710.807 - 2725.702: 83.6548% ( 521) 00:30:38.229 2725.702 - 2740.596: 84.5110% ( 491) 00:30:38.229 2740.596 - 2755.491: 85.3638% ( 489) 00:30:38.229 2755.491 - 2770.385: 86.1799% ( 468) 00:30:38.229 2770.385 - 2785.280: 86.9611% ( 448) 00:30:38.229 2785.280 - 2800.175: 87.7197% ( 435) 00:30:38.229 2800.175 - 2815.069: 88.4312% ( 408) 00:30:38.229 2815.069 - 2829.964: 89.0834% ( 374) 00:30:38.229 2829.964 - 2844.858: 89.7252% ( 368) 00:30:38.229 2844.858 - 2859.753: 90.3495% ( 358) 00:30:38.229 2859.753 - 2874.647: 90.8918% ( 311) 00:30:38.229 2874.647 - 2889.542: 91.4062% ( 295) 00:30:38.229 2889.542 - 2904.436: 91.9050% ( 286) 00:30:38.229 2904.436 - 2919.331: 92.3741% ( 269) 00:30:38.229 2919.331 - 2934.225: 92.7752% ( 230) 00:30:38.229 2934.225 - 2949.120: 93.1309% ( 204) 00:30:38.229 2949.120 - 2964.015: 93.4658% ( 192) 00:30:38.229 2964.015 - 2978.909: 93.7692% ( 174) 00:30:38.229 2978.909 - 2993.804: 94.0587% ( 166) 00:30:38.229 2993.804 - 3008.698: 94.3133% ( 146) 00:30:38.229 3008.698 - 3023.593: 94.5539% ( 138) 00:30:38.229 3023.593 - 3038.487: 94.7719% ( 125) 00:30:38.229 3038.487 - 3053.382: 94.9515% ( 103) 00:30:38.229 3053.382 - 3068.276: 95.1468% ( 112) 00:30:38.229 3068.276 - 3083.171: 95.3212% ( 100) 
00:30:38.229 3083.171 - 3098.065: 95.4973% ( 101) 00:30:38.229 3098.065 - 3112.960: 95.6595% ( 93) 00:30:38.229 3112.960 - 3127.855: 95.8165% ( 90) 00:30:38.229 3127.855 - 3142.749: 95.9560% ( 80) 00:30:38.229 3142.749 - 3157.644: 96.0955% ( 80) 00:30:38.229 3157.644 - 3172.538: 96.2367% ( 81) 00:30:38.229 3172.538 - 3187.433: 96.3728% ( 78) 00:30:38.229 3187.433 - 3202.327: 96.5088% ( 78) 00:30:38.229 3202.327 - 3217.222: 96.6500% ( 81) 00:30:38.229 3217.222 - 3232.116: 96.7896% ( 80) 00:30:38.229 3232.116 - 3247.011: 96.9238% ( 77) 00:30:38.229 3247.011 - 3261.905: 97.0598% ( 78) 00:30:38.229 3261.905 - 3276.800: 97.1854% ( 72) 00:30:38.229 3276.800 - 3291.695: 97.3162% ( 75) 00:30:38.229 3291.695 - 3306.589: 97.4452% ( 74) 00:30:38.229 3306.589 - 3321.484: 97.5673% ( 70) 00:30:38.229 3321.484 - 3336.378: 97.6998% ( 76) 00:30:38.229 3336.378 - 3351.273: 97.8306% ( 75) 00:30:38.229 3351.273 - 3366.167: 97.9632% ( 76) 00:30:38.229 3366.167 - 3381.062: 98.0905% ( 73) 00:30:38.229 3381.062 - 3395.956: 98.2091% ( 68) 00:30:38.229 3395.956 - 3410.851: 98.3276% ( 68) 00:30:38.229 3410.851 - 3425.745: 98.4375% ( 63) 00:30:38.229 3425.745 - 3440.640: 98.5491% ( 64) 00:30:38.229 3440.640 - 3455.535: 98.6537% ( 60) 00:30:38.229 3455.535 - 3470.429: 98.7479% ( 54) 00:30:38.229 3470.429 - 3485.324: 98.8386% ( 52) 00:30:38.229 3485.324 - 3500.218: 98.9205% ( 47) 00:30:38.229 3500.218 - 3515.113: 98.9886% ( 39) 00:30:38.229 3515.113 - 3530.007: 99.0531% ( 37) 00:30:38.229 3530.007 - 3544.902: 99.1054% ( 30) 00:30:38.229 3544.902 - 3559.796: 99.1560% ( 29) 00:30:38.229 3559.796 - 3574.691: 99.1943% ( 22) 00:30:38.229 3574.691 - 3589.585: 99.2257% ( 18) 00:30:38.229 3589.585 - 3604.480: 99.2484% ( 13) 00:30:38.229 3604.480 - 3619.375: 99.2676% ( 11) 00:30:38.229 3619.375 - 3634.269: 99.2850% ( 10) 00:30:38.229 3634.269 - 3649.164: 99.2990% ( 8) 00:30:38.229 3649.164 - 3664.058: 99.3129% ( 8) 00:30:38.229 3664.058 - 3678.953: 99.3286% ( 9) 00:30:38.229 3678.953 - 3693.847: 99.3408% ( 7) 00:30:38.229 3693.847 - 3708.742: 99.3513% ( 6) 00:30:38.229 3708.742 - 3723.636: 99.3687% ( 10) 00:30:38.229 3723.636 - 3738.531: 99.3792% ( 6) 00:30:38.229 3738.531 - 3753.425: 99.3914% ( 7) 00:30:38.229 3753.425 - 3768.320: 99.4088% ( 10) 00:30:38.229 3768.320 - 3783.215: 99.4210% ( 7) 00:30:38.229 3783.215 - 3798.109: 99.4315% ( 6) 00:30:38.229 3798.109 - 3813.004: 99.4437% ( 7) 00:30:38.229 3813.004 - 3842.793: 99.4594% ( 9) 00:30:38.229 3842.793 - 3872.582: 99.4768% ( 10) 00:30:38.229 3872.582 - 3902.371: 99.4925% ( 9) 00:30:38.229 3902.371 - 3932.160: 99.5065% ( 8) 00:30:38.229 3932.160 - 3961.949: 99.5204% ( 8) 00:30:38.229 3961.949 - 3991.738: 99.5361% ( 9) 00:30:38.229 3991.738 - 4021.527: 99.5518% ( 9) 00:30:38.229 4021.527 - 4051.316: 99.5658% ( 8) 00:30:38.229 4051.316 - 4081.105: 99.5780% ( 7) 00:30:38.229 4081.105 - 4110.895: 99.5902% ( 7) 00:30:38.229 4110.895 - 4140.684: 99.6007% ( 6) 00:30:38.229 4140.684 - 4170.473: 99.6094% ( 5) 00:30:38.229 4170.473 - 4200.262: 99.6198% ( 6) 00:30:38.229 4200.262 - 4230.051: 99.6320% ( 7) 00:30:38.229 4230.051 - 4259.840: 99.6460% ( 8) 00:30:38.229 4259.840 - 4289.629: 99.6565% ( 6) 00:30:38.229 4289.629 - 4319.418: 99.6704% ( 8) 00:30:38.229 4319.418 - 4349.207: 99.6826% ( 7) 00:30:38.229 4349.207 - 4378.996: 99.6931% ( 6) 00:30:38.229 4378.996 - 4408.785: 99.7018% ( 5) 00:30:38.229 4408.785 - 4438.575: 99.7123% ( 6) 00:30:38.229 4438.575 - 4468.364: 99.7210% ( 5) 00:30:38.229 4468.364 - 4498.153: 99.7262% ( 3) 00:30:38.229 4498.153 - 4527.942: 99.7332% ( 4) 
00:30:38.229 4527.942 - 4557.731: 99.7402% ( 4) 00:30:38.229 4557.731 - 4587.520: 99.7454% ( 3) 00:30:38.229 4587.520 - 4617.309: 99.7524% ( 4) 00:30:38.229 4617.309 - 4647.098: 99.7593% ( 4) 00:30:38.229 4647.098 - 4676.887: 99.7646% ( 3) 00:30:38.229 4676.887 - 4706.676: 99.7716% ( 4) 00:30:38.229 4706.676 - 4736.465: 99.7785% ( 4) 00:30:38.229 4736.465 - 4766.255: 99.7820% ( 2) 00:30:38.229 4766.255 - 4796.044: 99.7890% ( 4) 00:30:38.229 4796.044 - 4825.833: 99.7960% ( 4) 00:30:38.229 4825.833 - 4855.622: 99.8012% ( 3) 00:30:38.229 4855.622 - 4885.411: 99.8082% ( 4) 00:30:38.229 4885.411 - 4915.200: 99.8152% ( 4) 00:30:38.229 4915.200 - 4944.989: 99.8204% ( 3) 00:30:38.229 4944.989 - 4974.778: 99.8274% ( 4) 00:30:38.229 4974.778 - 5004.567: 99.8326% ( 3) 00:30:38.229 5004.567 - 5034.356: 99.8396% ( 4) 00:30:38.229 5034.356 - 5064.145: 99.8448% ( 3) 00:30:38.229 5064.145 - 5093.935: 99.8500% ( 3) 00:30:38.229 5093.935 - 5123.724: 99.8535% ( 2) 00:30:38.229 5123.724 - 5153.513: 99.8605% ( 4) 00:30:38.229 5153.513 - 5183.302: 99.8640% ( 2) 00:30:38.229 5183.302 - 5213.091: 99.8710% ( 4) 00:30:38.229 5213.091 - 5242.880: 99.8762% ( 3) 00:30:38.229 5242.880 - 5272.669: 99.8832% ( 4) 00:30:38.229 5272.669 - 5302.458: 99.8884% ( 3) 00:30:38.229 5302.458 - 5332.247: 99.8936% ( 3) 00:30:38.229 5332.247 - 5362.036: 99.9006% ( 4) 00:30:38.229 5362.036 - 5391.825: 99.9041% ( 2) 00:30:38.229 5391.825 - 5421.615: 99.9111% ( 4) 00:30:38.229 5421.615 - 5451.404: 99.9163% ( 3) 00:30:38.229 5451.404 - 5481.193: 99.9233% ( 4) 00:30:38.229 5481.193 - 5510.982: 99.9285% ( 3) 00:30:38.229 5510.982 - 5540.771: 99.9355% ( 4) 00:30:38.229 5540.771 - 5570.560: 99.9407% ( 3) 00:30:38.229 5570.560 - 5600.349: 99.9477% ( 4) 00:30:38.229 5600.349 - 5630.138: 99.9529% ( 3) 00:30:38.229 5630.138 - 5659.927: 99.9581% ( 3) 00:30:38.229 5659.927 - 5689.716: 99.9616% ( 2) 00:30:38.229 5689.716 - 5719.505: 99.9686% ( 4) 00:30:38.229 5719.505 - 5749.295: 99.9721% ( 2) 00:30:38.229 5749.295 - 5779.084: 99.9738% ( 1) 00:30:38.229 5779.084 - 5808.873: 99.9756% ( 1) 00:30:38.229 5808.873 - 5838.662: 99.9773% ( 1) 00:30:38.229 5868.451 - 5898.240: 99.9791% ( 1) 00:30:38.229 5898.240 - 5928.029: 99.9808% ( 1) 00:30:38.229 5928.029 - 5957.818: 99.9826% ( 1) 00:30:38.230 5957.818 - 5987.607: 99.9843% ( 1) 00:30:38.230 5987.607 - 6017.396: 99.9860% ( 1) 00:30:38.230 6017.396 - 6047.185: 99.9878% ( 1) 00:30:38.230 6047.185 - 6076.975: 99.9895% ( 1) 00:30:38.230 6076.975 - 6106.764: 99.9913% ( 1) 00:30:38.230 6106.764 - 6136.553: 99.9930% ( 1) 00:30:38.230 6136.553 - 6166.342: 99.9948% ( 1) 00:30:38.230 6166.342 - 6196.131: 99.9965% ( 1) 00:30:38.230 6225.920 - 6255.709: 99.9983% ( 1) 00:30:38.230 6315.287 - 6345.076: 100.0000% ( 1) 00:30:38.230 00:30:38.230 16:46:14 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:30:39.602 Initializing NVMe Controllers 00:30:39.602 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:39.602 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:39.602 Initialization complete. Launching workers. 
00:30:39.602 ======================================================== 00:30:39.602 Latency(us) 00:30:39.602 Device Information : IOPS MiB/s Average min max 00:30:39.602 PCIE (0000:00:06.0) NSID 1 from core 0: 49726.95 582.74 2575.14 1323.84 10565.78 00:30:39.602 ======================================================== 00:30:39.602 Total : 49726.95 582.74 2575.14 1323.84 10565.78 00:30:39.602 00:30:39.602 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:39.602 ================================================================================= 00:30:39.602 1.00000% : 1787.345us 00:30:39.602 10.00000% : 2010.764us 00:30:39.602 25.00000% : 2189.498us 00:30:39.602 50.00000% : 2427.811us 00:30:39.602 75.00000% : 2889.542us 00:30:39.602 90.00000% : 3336.378us 00:30:39.602 95.00000% : 3664.058us 00:30:39.602 98.00000% : 4051.316us 00:30:39.602 99.00000% : 4259.840us 00:30:39.602 99.50000% : 4468.364us 00:30:39.602 99.90000% : 5540.771us 00:30:39.602 99.99000% : 10545.338us 00:30:39.602 99.99900% : 10604.916us 00:30:39.602 99.99990% : 10604.916us 00:30:39.602 99.99999% : 10604.916us 00:30:39.602 00:30:39.602 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:39.602 ============================================================================== 00:30:39.602 Range in us Cumulative IO count 00:30:39.602 1318.167 - 1325.615: 0.0020% ( 1) 00:30:39.602 1355.404 - 1362.851: 0.0040% ( 1) 00:30:39.602 1377.745 - 1385.193: 0.0060% ( 1) 00:30:39.602 1385.193 - 1392.640: 0.0080% ( 1) 00:30:39.602 1392.640 - 1400.087: 0.0101% ( 1) 00:30:39.602 1400.087 - 1407.535: 0.0121% ( 1) 00:30:39.602 1422.429 - 1429.876: 0.0161% ( 2) 00:30:39.602 1437.324 - 1444.771: 0.0201% ( 2) 00:30:39.602 1444.771 - 1452.218: 0.0241% ( 2) 00:30:39.602 1452.218 - 1459.665: 0.0261% ( 1) 00:30:39.602 1459.665 - 1467.113: 0.0302% ( 2) 00:30:39.602 1474.560 - 1482.007: 0.0342% ( 2) 00:30:39.602 1482.007 - 1489.455: 0.0402% ( 3) 00:30:39.602 1489.455 - 1496.902: 0.0463% ( 3) 00:30:39.602 1496.902 - 1504.349: 0.0483% ( 1) 00:30:39.602 1504.349 - 1511.796: 0.0563% ( 4) 00:30:39.602 1511.796 - 1519.244: 0.0623% ( 3) 00:30:39.602 1519.244 - 1526.691: 0.0644% ( 1) 00:30:39.602 1526.691 - 1534.138: 0.0664% ( 1) 00:30:39.602 1534.138 - 1541.585: 0.0744% ( 4) 00:30:39.602 1541.585 - 1549.033: 0.0825% ( 4) 00:30:39.602 1556.480 - 1563.927: 0.0925% ( 5) 00:30:39.602 1563.927 - 1571.375: 0.0985% ( 3) 00:30:39.602 1571.375 - 1578.822: 0.1086% ( 5) 00:30:39.602 1578.822 - 1586.269: 0.1126% ( 2) 00:30:39.602 1586.269 - 1593.716: 0.1247% ( 6) 00:30:39.602 1593.716 - 1601.164: 0.1347% ( 5) 00:30:39.602 1601.164 - 1608.611: 0.1468% ( 6) 00:30:39.602 1608.611 - 1616.058: 0.1508% ( 2) 00:30:39.602 1616.058 - 1623.505: 0.1609% ( 5) 00:30:39.602 1623.505 - 1630.953: 0.1729% ( 6) 00:30:39.602 1630.953 - 1638.400: 0.1870% ( 7) 00:30:39.602 1638.400 - 1645.847: 0.1991% ( 6) 00:30:39.602 1645.847 - 1653.295: 0.2112% ( 6) 00:30:39.602 1653.295 - 1660.742: 0.2353% ( 12) 00:30:39.602 1660.742 - 1668.189: 0.2453% ( 5) 00:30:39.602 1668.189 - 1675.636: 0.2634% ( 9) 00:30:39.602 1675.636 - 1683.084: 0.2835% ( 10) 00:30:39.602 1683.084 - 1690.531: 0.3238% ( 20) 00:30:39.602 1690.531 - 1697.978: 0.3499% ( 13) 00:30:39.602 1697.978 - 1705.425: 0.3761% ( 13) 00:30:39.602 1705.425 - 1712.873: 0.4082% ( 16) 00:30:39.602 1712.873 - 1720.320: 0.4585% ( 25) 00:30:39.602 1720.320 - 1727.767: 0.5007% ( 21) 00:30:39.602 1727.767 - 1735.215: 0.5510% ( 25) 00:30:39.602 1735.215 - 1742.662: 0.6033% ( 26) 00:30:39.602 1742.662 - 1750.109: 0.6676% ( 
32) 00:30:39.602 1750.109 - 1757.556: 0.7421% ( 37) 00:30:39.602 1757.556 - 1765.004: 0.8024% ( 30) 00:30:39.602 1765.004 - 1772.451: 0.8889% ( 43) 00:30:39.602 1772.451 - 1779.898: 0.9552% ( 33) 00:30:39.602 1779.898 - 1787.345: 1.0598% ( 52) 00:30:39.602 1787.345 - 1794.793: 1.1483% ( 44) 00:30:39.602 1794.793 - 1802.240: 1.2649% ( 58) 00:30:39.602 1802.240 - 1809.687: 1.3775% ( 56) 00:30:39.602 1809.687 - 1817.135: 1.5102% ( 66) 00:30:39.602 1817.135 - 1824.582: 1.6430% ( 66) 00:30:39.602 1824.582 - 1832.029: 1.7837% ( 70) 00:30:39.602 1832.029 - 1839.476: 1.9587% ( 87) 00:30:39.602 1839.476 - 1846.924: 2.1377% ( 89) 00:30:39.602 1846.924 - 1854.371: 2.3508% ( 106) 00:30:39.602 1854.371 - 1861.818: 2.5479% ( 98) 00:30:39.602 1861.818 - 1869.265: 2.8576% ( 154) 00:30:39.602 1869.265 - 1876.713: 3.0869% ( 114) 00:30:39.602 1876.713 - 1884.160: 3.3463% ( 129) 00:30:39.602 1884.160 - 1891.607: 3.6178% ( 135) 00:30:39.602 1891.607 - 1899.055: 3.8973% ( 139) 00:30:39.602 1899.055 - 1906.502: 4.1828% ( 142) 00:30:39.602 1906.502 - 1921.396: 4.7540% ( 284) 00:30:39.602 1921.396 - 1936.291: 5.4618% ( 352) 00:30:39.602 1936.291 - 1951.185: 6.2582% ( 396) 00:30:39.602 1951.185 - 1966.080: 7.0988% ( 418) 00:30:39.602 1966.080 - 1980.975: 8.1525% ( 524) 00:30:39.602 1980.975 - 1995.869: 9.1741% ( 508) 00:30:39.602 1995.869 - 2010.764: 10.4491% ( 634) 00:30:39.602 2010.764 - 2025.658: 11.5129% ( 529) 00:30:39.602 2025.658 - 2040.553: 12.7697% ( 625) 00:30:39.602 2040.553 - 2055.447: 13.8536% ( 539) 00:30:39.602 2055.447 - 2070.342: 14.9717% ( 556) 00:30:39.602 2070.342 - 2085.236: 16.1803% ( 601) 00:30:39.602 2085.236 - 2100.131: 17.2964% ( 555) 00:30:39.602 2100.131 - 2115.025: 18.5432% ( 620) 00:30:39.602 2115.025 - 2129.920: 19.7619% ( 606) 00:30:39.602 2129.920 - 2144.815: 21.0851% ( 658) 00:30:39.602 2144.815 - 2159.709: 22.4948% ( 701) 00:30:39.602 2159.709 - 2174.604: 23.8904% ( 694) 00:30:39.602 2174.604 - 2189.498: 25.2901% ( 696) 00:30:39.602 2189.498 - 2204.393: 26.5671% ( 635) 00:30:39.602 2204.393 - 2219.287: 27.8199% ( 623) 00:30:39.602 2219.287 - 2234.182: 29.3764% ( 774) 00:30:39.602 2234.182 - 2249.076: 30.8384% ( 727) 00:30:39.602 2249.076 - 2263.971: 32.2642% ( 709) 00:30:39.602 2263.971 - 2278.865: 34.0620% ( 894) 00:30:39.602 2278.865 - 2293.760: 35.7934% ( 861) 00:30:39.603 2293.760 - 2308.655: 37.8627% ( 1029) 00:30:39.603 2308.655 - 2323.549: 39.6646% ( 896) 00:30:39.603 2323.549 - 2338.444: 41.3276% ( 827) 00:30:39.603 2338.444 - 2353.338: 43.0873% ( 875) 00:30:39.603 2353.338 - 2368.233: 44.5693% ( 737) 00:30:39.603 2368.233 - 2383.127: 46.2043% ( 813) 00:30:39.603 2383.127 - 2398.022: 47.7266% ( 757) 00:30:39.603 2398.022 - 2412.916: 49.1966% ( 731) 00:30:39.603 2412.916 - 2427.811: 50.5580% ( 677) 00:30:39.603 2427.811 - 2442.705: 51.8491% ( 642) 00:30:39.603 2442.705 - 2457.600: 53.0597% ( 602) 00:30:39.603 2457.600 - 2472.495: 54.0974% ( 516) 00:30:39.603 2472.495 - 2487.389: 55.1491% ( 523) 00:30:39.603 2487.389 - 2502.284: 56.1265% ( 486) 00:30:39.603 2502.284 - 2517.178: 57.0535% ( 461) 00:30:39.603 2517.178 - 2532.073: 57.9705% ( 456) 00:30:39.603 2532.073 - 2546.967: 58.7669% ( 396) 00:30:39.603 2546.967 - 2561.862: 59.6396% ( 434) 00:30:39.603 2561.862 - 2576.756: 60.4279% ( 392) 00:30:39.603 2576.756 - 2591.651: 61.2203% ( 394) 00:30:39.603 2591.651 - 2606.545: 61.9663% ( 371) 00:30:39.603 2606.545 - 2621.440: 62.6983% ( 364) 00:30:39.603 2621.440 - 2636.335: 63.4927% ( 395) 00:30:39.603 2636.335 - 2651.229: 64.2568% ( 380) 00:30:39.603 2651.229 - 2666.124: 
65.0150% ( 377) 00:30:39.603 2666.124 - 2681.018: 65.7249% ( 353) 00:30:39.603 2681.018 - 2695.913: 66.4548% ( 363) 00:30:39.603 2695.913 - 2710.807: 67.2029% ( 372) 00:30:39.603 2710.807 - 2725.702: 67.8967% ( 345) 00:30:39.603 2725.702 - 2740.596: 68.6106% ( 355) 00:30:39.603 2740.596 - 2755.491: 69.2742% ( 330) 00:30:39.603 2755.491 - 2770.385: 69.9640% ( 343) 00:30:39.603 2770.385 - 2785.280: 70.6779% ( 355) 00:30:39.603 2785.280 - 2800.175: 71.3958% ( 357) 00:30:39.603 2800.175 - 2815.069: 72.1318% ( 366) 00:30:39.603 2815.069 - 2829.964: 72.8296% ( 347) 00:30:39.603 2829.964 - 2844.858: 73.5536% ( 360) 00:30:39.603 2844.858 - 2859.753: 74.2454% ( 344) 00:30:39.603 2859.753 - 2874.647: 74.9693% ( 360) 00:30:39.603 2874.647 - 2889.542: 75.6209% ( 324) 00:30:39.603 2889.542 - 2904.436: 76.2765% ( 326) 00:30:39.603 2904.436 - 2919.331: 76.9200% ( 320) 00:30:39.603 2919.331 - 2934.225: 77.6218% ( 349) 00:30:39.603 2934.225 - 2949.120: 78.3055% ( 340) 00:30:39.603 2949.120 - 2964.015: 79.0013% ( 346) 00:30:39.603 2964.015 - 2978.909: 79.8198% ( 407) 00:30:39.603 2978.909 - 2993.804: 80.4774% ( 327) 00:30:39.603 2993.804 - 3008.698: 81.1451% ( 332) 00:30:39.603 3008.698 - 3023.593: 81.7765% ( 314) 00:30:39.603 3023.593 - 3038.487: 82.3939% ( 307) 00:30:39.603 3038.487 - 3053.382: 82.9952% ( 299) 00:30:39.603 3053.382 - 3068.276: 83.5964% ( 299) 00:30:39.603 3068.276 - 3083.171: 84.1374% ( 269) 00:30:39.603 3083.171 - 3098.065: 84.6160% ( 238) 00:30:39.603 3098.065 - 3112.960: 85.1429% ( 262) 00:30:39.603 3112.960 - 3127.855: 85.6014% ( 228) 00:30:39.603 3127.855 - 3142.749: 85.9774% ( 187) 00:30:39.603 3142.749 - 3157.644: 86.4158% ( 218) 00:30:39.603 3157.644 - 3172.538: 86.7999% ( 191) 00:30:39.603 3172.538 - 3187.433: 87.1599% ( 179) 00:30:39.603 3187.433 - 3202.327: 87.5199% ( 179) 00:30:39.603 3202.327 - 3217.222: 87.8497% ( 164) 00:30:39.603 3217.222 - 3232.116: 88.1835% ( 166) 00:30:39.603 3232.116 - 3247.011: 88.4912% ( 153) 00:30:39.603 3247.011 - 3261.905: 88.8029% ( 155) 00:30:39.603 3261.905 - 3276.800: 89.0965% ( 146) 00:30:39.603 3276.800 - 3291.695: 89.4021% ( 152) 00:30:39.603 3291.695 - 3306.589: 89.6837% ( 140) 00:30:39.603 3306.589 - 3321.484: 89.9572% ( 136) 00:30:39.603 3321.484 - 3336.378: 90.2126% ( 127) 00:30:39.603 3336.378 - 3351.273: 90.4760% ( 131) 00:30:39.603 3351.273 - 3366.167: 90.7595% ( 141) 00:30:39.603 3366.167 - 3381.062: 91.0230% ( 131) 00:30:39.603 3381.062 - 3395.956: 91.2824% ( 129) 00:30:39.603 3395.956 - 3410.851: 91.5117% ( 114) 00:30:39.603 3410.851 - 3425.745: 91.7570% ( 122) 00:30:39.603 3425.745 - 3440.640: 92.0064% ( 124) 00:30:39.603 3440.640 - 3455.535: 92.2175% ( 105) 00:30:39.603 3455.535 - 3470.429: 92.4468% ( 114) 00:30:39.603 3470.429 - 3485.324: 92.6700% ( 111) 00:30:39.603 3485.324 - 3500.218: 92.8771% ( 103) 00:30:39.603 3500.218 - 3515.113: 93.1003% ( 111) 00:30:39.603 3515.113 - 3530.007: 93.3195% ( 109) 00:30:39.603 3530.007 - 3544.902: 93.5186% ( 99) 00:30:39.603 3544.902 - 3559.796: 93.7197% ( 100) 00:30:39.603 3559.796 - 3574.691: 93.9228% ( 101) 00:30:39.603 3574.691 - 3589.585: 94.1501% ( 113) 00:30:39.603 3589.585 - 3604.480: 94.3391% ( 94) 00:30:39.603 3604.480 - 3619.375: 94.5201% ( 90) 00:30:39.603 3619.375 - 3634.269: 94.6729% ( 76) 00:30:39.603 3634.269 - 3649.164: 94.8599% ( 93) 00:30:39.603 3649.164 - 3664.058: 95.0268% ( 83) 00:30:39.603 3664.058 - 3678.953: 95.1716% ( 72) 00:30:39.603 3678.953 - 3693.847: 95.3184% ( 73) 00:30:39.603 3693.847 - 3708.742: 95.4612% ( 71) 00:30:39.603 3708.742 - 3723.636: 95.6040% ( 
71) 00:30:39.603 3723.636 - 3738.531: 95.7488% ( 72) 00:30:39.603 3738.531 - 3753.425: 95.8896% ( 70) 00:30:39.603 3753.425 - 3768.320: 96.0263% ( 68) 00:30:39.603 3768.320 - 3783.215: 96.1631% ( 68) 00:30:39.603 3783.215 - 3798.109: 96.2797% ( 58) 00:30:39.603 3798.109 - 3813.004: 96.4144% ( 67) 00:30:39.603 3813.004 - 3842.793: 96.6638% ( 124) 00:30:39.603 3842.793 - 3872.582: 96.9011% ( 118) 00:30:39.603 3872.582 - 3902.371: 97.1424% ( 120) 00:30:39.603 3902.371 - 3932.160: 97.3515% ( 104) 00:30:39.603 3932.160 - 3961.949: 97.5526% ( 100) 00:30:39.603 3961.949 - 3991.738: 97.7497% ( 98) 00:30:39.603 3991.738 - 4021.527: 97.9227% ( 86) 00:30:39.603 4021.527 - 4051.316: 98.0876% ( 82) 00:30:39.603 4051.316 - 4081.105: 98.2585% ( 85) 00:30:39.603 4081.105 - 4110.895: 98.4174% ( 79) 00:30:39.603 4110.895 - 4140.684: 98.5441% ( 63) 00:30:39.603 4140.684 - 4170.473: 98.6848% ( 70) 00:30:39.603 4170.473 - 4200.262: 98.7914% ( 53) 00:30:39.603 4200.262 - 4230.051: 98.8940% ( 51) 00:30:39.603 4230.051 - 4259.840: 99.0026% ( 54) 00:30:39.603 4259.840 - 4289.629: 99.0971% ( 47) 00:30:39.603 4289.629 - 4319.418: 99.1896% ( 46) 00:30:39.603 4319.418 - 4349.207: 99.2700% ( 40) 00:30:39.603 4349.207 - 4378.996: 99.3424% ( 36) 00:30:39.603 4378.996 - 4408.785: 99.4088% ( 33) 00:30:39.603 4408.785 - 4438.575: 99.4631% ( 27) 00:30:39.603 4438.575 - 4468.364: 99.5194% ( 28) 00:30:39.603 4468.364 - 4498.153: 99.5697% ( 25) 00:30:39.603 4498.153 - 4527.942: 99.6079% ( 19) 00:30:39.603 4527.942 - 4557.731: 99.6360% ( 14) 00:30:39.603 4557.731 - 4587.520: 99.6702% ( 17) 00:30:39.603 4587.520 - 4617.309: 99.7024% ( 16) 00:30:39.603 4617.309 - 4647.098: 99.7185% ( 8) 00:30:39.603 4647.098 - 4676.887: 99.7386% ( 10) 00:30:39.603 4676.887 - 4706.676: 99.7587% ( 10) 00:30:39.603 4706.676 - 4736.465: 99.7748% ( 8) 00:30:39.603 4736.465 - 4766.255: 99.7909% ( 8) 00:30:39.603 4766.255 - 4796.044: 99.8069% ( 8) 00:30:39.603 4796.044 - 4825.833: 99.8150% ( 4) 00:30:39.603 4825.833 - 4855.622: 99.8190% ( 2) 00:30:39.603 4855.622 - 4885.411: 99.8250% ( 3) 00:30:39.603 4885.411 - 4915.200: 99.8311% ( 3) 00:30:39.603 4915.200 - 4944.989: 99.8351% ( 2) 00:30:39.603 4944.989 - 4974.778: 99.8391% ( 2) 00:30:39.603 4974.778 - 5004.567: 99.8452% ( 3) 00:30:39.603 5004.567 - 5034.356: 99.8492% ( 2) 00:30:39.603 5034.356 - 5064.145: 99.8552% ( 3) 00:30:39.603 5064.145 - 5093.935: 99.8612% ( 3) 00:30:39.603 5093.935 - 5123.724: 99.8673% ( 3) 00:30:39.604 5123.724 - 5153.513: 99.8713% ( 2) 00:30:39.604 5153.513 - 5183.302: 99.8733% ( 1) 00:30:39.604 5183.302 - 5213.091: 99.8753% ( 1) 00:30:39.604 5213.091 - 5242.880: 99.8773% ( 1) 00:30:39.604 5242.880 - 5272.669: 99.8793% ( 1) 00:30:39.604 5272.669 - 5302.458: 99.8814% ( 1) 00:30:39.604 5302.458 - 5332.247: 99.8834% ( 1) 00:30:39.604 5332.247 - 5362.036: 99.8854% ( 1) 00:30:39.604 5362.036 - 5391.825: 99.8874% ( 1) 00:30:39.604 5391.825 - 5421.615: 99.8894% ( 1) 00:30:39.604 5421.615 - 5451.404: 99.8934% ( 2) 00:30:39.604 5451.404 - 5481.193: 99.8954% ( 1) 00:30:39.604 5481.193 - 5510.982: 99.8995% ( 2) 00:30:39.604 5510.982 - 5540.771: 99.9015% ( 1) 00:30:39.604 5540.771 - 5570.560: 99.9035% ( 1) 00:30:39.604 5570.560 - 5600.349: 99.9055% ( 1) 00:30:39.604 5600.349 - 5630.138: 99.9075% ( 1) 00:30:39.604 5630.138 - 5659.927: 99.9095% ( 1) 00:30:39.604 5659.927 - 5689.716: 99.9135% ( 2) 00:30:39.604 5689.716 - 5719.505: 99.9155% ( 1) 00:30:39.604 5719.505 - 5749.295: 99.9175% ( 1) 00:30:39.604 5749.295 - 5779.084: 99.9196% ( 1) 00:30:39.604 5779.084 - 5808.873: 99.9216% ( 1) 
00:30:39.604 5808.873 - 5838.662: 99.9236% ( 1) 00:30:39.604 5838.662 - 5868.451: 99.9256% ( 1) 00:30:39.604 5868.451 - 5898.240: 99.9276% ( 1) 00:30:39.604 5898.240 - 5928.029: 99.9316% ( 2) 00:30:39.604 5928.029 - 5957.818: 99.9336% ( 1) 00:30:39.604 5957.818 - 5987.607: 99.9356% ( 1) 00:30:39.604 5987.607 - 6017.396: 99.9377% ( 1) 00:30:39.604 6017.396 - 6047.185: 99.9397% ( 1) 00:30:39.604 6047.185 - 6076.975: 99.9417% ( 1) 00:30:39.604 6076.975 - 6106.764: 99.9437% ( 1) 00:30:39.604 6106.764 - 6136.553: 99.9457% ( 1) 00:30:39.604 6136.553 - 6166.342: 99.9477% ( 1) 00:30:39.604 6166.342 - 6196.131: 99.9517% ( 2) 00:30:39.604 6196.131 - 6225.920: 99.9537% ( 1) 00:30:39.604 6225.920 - 6255.709: 99.9558% ( 1) 00:30:39.604 6255.709 - 6285.498: 99.9578% ( 1) 00:30:39.604 6285.498 - 6315.287: 99.9598% ( 1) 00:30:39.604 6315.287 - 6345.076: 99.9618% ( 1) 00:30:39.604 6345.076 - 6374.865: 99.9638% ( 1) 00:30:39.604 6404.655 - 6434.444: 99.9658% ( 1) 00:30:39.604 6434.444 - 6464.233: 99.9678% ( 1) 00:30:39.604 6762.124 - 6791.913: 99.9718% ( 2) 00:30:39.604 6821.702 - 6851.491: 99.9739% ( 1) 00:30:39.604 6851.491 - 6881.280: 99.9759% ( 1) 00:30:39.604 6940.858 - 6970.647: 99.9799% ( 2) 00:30:39.604 6970.647 - 7000.436: 99.9819% ( 1) 00:30:39.604 7000.436 - 7030.225: 99.9839% ( 1) 00:30:39.604 8519.680 - 8579.258: 99.9859% ( 1) 00:30:39.604 10426.182 - 10485.760: 99.9879% ( 1) 00:30:39.604 10485.760 - 10545.338: 99.9980% ( 5) 00:30:39.604 10545.338 - 10604.916: 100.0000% ( 1) 00:30:39.604 00:30:39.604 16:46:16 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:30:39.604 ************************************ 00:30:39.604 END TEST nvme_perf 00:30:39.604 ************************************ 00:30:39.604 00:30:39.604 real 0m2.623s 00:30:39.604 user 0m2.255s 00:30:39.604 sys 0m0.215s 00:30:39.604 16:46:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:39.604 16:46:16 -- common/autotest_common.sh@10 -- # set +x 00:30:39.604 16:46:16 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:39.604 16:46:16 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:30:39.604 16:46:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:39.604 16:46:16 -- common/autotest_common.sh@10 -- # set +x 00:30:39.604 ************************************ 00:30:39.604 START TEST nvme_hello_world 00:30:39.604 ************************************ 00:30:39.604 16:46:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:39.862 Initializing NVMe Controllers 00:30:39.862 Attached to 0000:00:06.0 00:30:39.862 Namespace ID: 1 size: 5GB 00:30:39.862 Initialization complete. 00:30:39.862 INFO: using host memory buffer for IO 00:30:39.862 Hello world! 
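The nvme_hello_world step above simply runs SPDK's bundled hello_world example against the attached controller. A minimal sketch of reproducing it by hand, assuming the vagrant repo layout used in this run and that hugepage reservation and device binding are handled by scripts/setup.sh (both paths appear in the log; the setup step is an assumption about local prerequisites, not something this run shows):

  SPDK=/home/vagrant/spdk_repo/spdk
  sudo "$SPDK/scripts/setup.sh"                  # reserve hugepages, rebind NVMe devices to a userspace driver
  sudo "$SPDK/build/examples/hello_world" -i 0   # -i 0: same shared-memory ID the harness passes above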
00:30:39.862 00:30:39.862 real 0m0.288s 00:30:39.862 user 0m0.103s 00:30:39.862 sys 0m0.099s 00:30:39.862 16:46:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:39.862 16:46:16 -- common/autotest_common.sh@10 -- # set +x 00:30:39.862 ************************************ 00:30:39.862 END TEST nvme_hello_world 00:30:39.862 ************************************ 00:30:39.862 16:46:16 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:39.862 16:46:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:39.862 16:46:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:39.862 16:46:16 -- common/autotest_common.sh@10 -- # set +x 00:30:39.862 ************************************ 00:30:39.862 START TEST nvme_sgl 00:30:39.862 ************************************ 00:30:39.862 16:46:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:40.121 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:30:40.121 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:30:40.121 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:30:40.379 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:30:40.379 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:30:40.379 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:30:40.379 NVMe Readv/Writev Request test 00:30:40.379 Attached to 0000:00:06.0 00:30:40.379 0000:00:06.0: build_io_request_2 test passed 00:30:40.379 0000:00:06.0: build_io_request_4 test passed 00:30:40.379 0000:00:06.0: build_io_request_5 test passed 00:30:40.379 0000:00:06.0: build_io_request_6 test passed 00:30:40.379 0000:00:06.0: build_io_request_7 test passed 00:30:40.379 0000:00:06.0: build_io_request_10 test passed 00:30:40.379 Cleaning up... 00:30:40.379 00:30:40.379 real 0m0.415s 00:30:40.379 user 0m0.202s 00:30:40.379 sys 0m0.132s 00:30:40.379 16:46:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:40.379 16:46:17 -- common/autotest_common.sh@10 -- # set +x 00:30:40.379 ************************************ 00:30:40.379 END TEST nvme_sgl 00:30:40.379 ************************************ 00:30:40.379 16:46:17 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:40.379 16:46:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:40.380 16:46:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:40.380 16:46:17 -- common/autotest_common.sh@10 -- # set +x 00:30:40.380 ************************************ 00:30:40.380 START TEST nvme_e2edp 00:30:40.380 ************************************ 00:30:40.380 16:46:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:40.638 NVMe Write/Read with End-to-End data protection test 00:30:40.638 Attached to 0000:00:06.0 00:30:40.638 Cleaning up... 
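Each of these functional tests follows the same pattern: the suite's run_test wrapper times a standalone test binary and prints the START/END banners and the real/user/sys totals (the xtrace frames above show run_test's body executing out of common/autotest_common.sh). A hedged sketch of that pattern, using the e2edp test that just completed:

  source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh   # provides run_test, per the xtrace output
  run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp   # banners and timing come from run_test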
00:30:40.638 00:30:40.638 real 0m0.313s 00:30:40.638 user 0m0.101s 00:30:40.638 sys 0m0.137s 00:30:40.638 16:46:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:40.638 16:46:17 -- common/autotest_common.sh@10 -- # set +x 00:30:40.638 ************************************ 00:30:40.638 END TEST nvme_e2edp 00:30:40.638 ************************************ 00:30:40.638 16:46:17 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:40.638 16:46:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:40.638 16:46:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:40.638 16:46:17 -- common/autotest_common.sh@10 -- # set +x 00:30:40.897 ************************************ 00:30:40.897 START TEST nvme_reserve 00:30:40.897 ************************************ 00:30:40.897 16:46:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:41.156 ===================================================== 00:30:41.156 NVMe Controller at PCI bus 0, device 6, function 0 00:30:41.156 ===================================================== 00:30:41.156 Reservations: Not Supported 00:30:41.156 Reservation test passed 00:30:41.156 00:30:41.156 real 0m0.314s 00:30:41.156 user 0m0.083s 00:30:41.156 sys 0m0.154s 00:30:41.156 16:46:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:41.156 ************************************ 00:30:41.156 16:46:17 -- common/autotest_common.sh@10 -- # set +x 00:30:41.156 END TEST nvme_reserve 00:30:41.156 ************************************ 00:30:41.156 16:46:17 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:41.156 16:46:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:41.156 16:46:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:41.156 16:46:17 -- common/autotest_common.sh@10 -- # set +x 00:30:41.156 ************************************ 00:30:41.156 START TEST nvme_err_injection 00:30:41.156 ************************************ 00:30:41.156 16:46:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:41.415 NVMe Error Injection test 00:30:41.415 Attached to 0000:00:06.0 00:30:41.415 0000:00:06.0: get features failed as expected 00:30:41.415 0000:00:06.0: get features successfully as expected 00:30:41.415 0000:00:06.0: read failed as expected 00:30:41.415 0000:00:06.0: read successfully as expected 00:30:41.415 Cleaning up... 
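The err_injection binary drives the controller through a failed-then-successful get-features and read pair, as the four "as expected" lines above record. A sketch of invoking it directly, assuming the same controller is still bound and the run's paths:

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection   # expected output: get features fails then succeeds, read fails then succeeds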
00:30:41.415 00:30:41.415 real 0m0.321s 00:30:41.415 user 0m0.108s 00:30:41.415 sys 0m0.135s 00:30:41.415 16:46:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:41.415 16:46:18 -- common/autotest_common.sh@10 -- # set +x 00:30:41.415 ************************************ 00:30:41.415 END TEST nvme_err_injection 00:30:41.415 ************************************ 00:30:41.415 16:46:18 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:41.415 16:46:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:30:41.415 16:46:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:41.415 16:46:18 -- common/autotest_common.sh@10 -- # set +x 00:30:41.415 ************************************ 00:30:41.415 START TEST nvme_overhead 00:30:41.415 ************************************ 00:30:41.415 16:46:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:42.792 Initializing NVMe Controllers 00:30:42.792 Attached to 0000:00:06.0 00:30:42.792 Initialization complete. Launching workers. 00:30:42.792 submit (in ns) avg, min, max = 14548.5, 11146.4, 186120.9 00:30:42.792 complete (in ns) avg, min, max = 10247.1, 7890.0, 1220047.3 00:30:42.792 00:30:42.792 Submit histogram 00:30:42.792 ================ 00:30:42.792 Range in us Cumulative Count 00:30:42.792 11.113 - 11.171: 0.0114% ( 1) 00:30:42.792 11.171 - 11.229: 0.0342% ( 2) 00:30:42.792 11.229 - 11.287: 0.0683% ( 3) 00:30:42.792 11.287 - 11.345: 0.1139% ( 4) 00:30:42.792 11.345 - 11.404: 0.3757% ( 23) 00:30:42.792 11.404 - 11.462: 1.1955% ( 72) 00:30:42.792 11.462 - 11.520: 3.0172% ( 160) 00:30:42.792 11.520 - 11.578: 4.4973% ( 130) 00:30:42.792 11.578 - 11.636: 5.4309% ( 82) 00:30:42.792 11.636 - 11.695: 6.3987% ( 85) 00:30:42.792 11.695 - 11.753: 9.0402% ( 232) 00:30:42.792 11.753 - 11.811: 13.5603% ( 397) 00:30:42.792 11.811 - 11.869: 17.0215% ( 304) 00:30:42.792 11.869 - 11.927: 18.7521% ( 152) 00:30:42.792 11.927 - 11.985: 19.7313% ( 86) 00:30:42.792 11.985 - 12.044: 21.5530% ( 160) 00:30:42.792 12.044 - 12.102: 24.5246% ( 261) 00:30:42.792 12.102 - 12.160: 27.3369% ( 247) 00:30:42.792 12.160 - 12.218: 28.6462% ( 115) 00:30:42.792 12.218 - 12.276: 29.2725% ( 55) 00:30:42.792 12.276 - 12.335: 29.6823% ( 36) 00:30:42.792 12.335 - 12.393: 30.7070% ( 90) 00:30:42.792 12.393 - 12.451: 31.4927% ( 69) 00:30:42.792 12.451 - 12.509: 32.0961% ( 53) 00:30:42.792 12.509 - 12.567: 32.6654% ( 50) 00:30:42.792 12.567 - 12.625: 33.1550% ( 43) 00:30:42.792 12.625 - 12.684: 33.9747% ( 72) 00:30:42.792 12.684 - 12.742: 35.1361% ( 102) 00:30:42.792 12.742 - 12.800: 36.9919% ( 163) 00:30:42.792 12.800 - 12.858: 40.0547% ( 269) 00:30:42.792 12.858 - 12.916: 43.6411% ( 315) 00:30:42.792 12.916 - 12.975: 46.6925% ( 268) 00:30:42.792 12.975 - 13.033: 49.6300% ( 258) 00:30:42.792 13.033 - 13.091: 53.8085% ( 367) 00:30:42.792 13.091 - 13.149: 59.0573% ( 461) 00:30:42.792 13.149 - 13.207: 64.4085% ( 470) 00:30:42.792 13.207 - 13.265: 67.9950% ( 315) 00:30:42.792 13.265 - 13.324: 71.1033% ( 273) 00:30:42.792 13.324 - 13.382: 73.3235% ( 195) 00:30:42.792 13.382 - 13.440: 75.4640% ( 188) 00:30:42.792 13.440 - 13.498: 77.2857% ( 160) 00:30:42.792 13.498 - 13.556: 78.6747% ( 122) 00:30:42.792 13.556 - 13.615: 79.9271% ( 110) 00:30:42.792 13.615 - 13.673: 81.0999% ( 103) 00:30:42.792 13.673 - 13.731: 81.8171% ( 63) 00:30:42.792 13.731 - 13.789: 82.3181% ( 44) 00:30:42.792 13.789 - 13.847: 82.8874% ( 50) 
00:30:42.792 13.847 - 13.905: 83.3884% ( 44) 00:30:42.792 13.905 - 13.964: 83.7527% ( 32) 00:30:42.792 13.964 - 14.022: 84.1740% ( 37) 00:30:42.792 14.022 - 14.080: 84.6522% ( 42) 00:30:42.792 14.080 - 14.138: 85.0621% ( 36) 00:30:42.792 14.138 - 14.196: 85.4947% ( 38) 00:30:42.792 14.196 - 14.255: 85.8249% ( 29) 00:30:42.792 14.255 - 14.313: 86.0981% ( 24) 00:30:42.792 14.313 - 14.371: 86.3372% ( 21) 00:30:42.792 14.371 - 14.429: 86.5536% ( 19) 00:30:42.792 14.429 - 14.487: 86.7130% ( 14) 00:30:42.792 14.487 - 14.545: 86.8610% ( 13) 00:30:42.792 14.545 - 14.604: 87.0090% ( 13) 00:30:42.792 14.604 - 14.662: 87.1115% ( 9) 00:30:42.792 14.662 - 14.720: 87.2026% ( 8) 00:30:42.792 14.720 - 14.778: 87.3278% ( 11) 00:30:42.792 14.778 - 14.836: 87.4530% ( 11) 00:30:42.792 14.836 - 14.895: 87.5213% ( 6) 00:30:42.792 14.895 - 15.011: 87.7718% ( 22) 00:30:42.792 15.011 - 15.127: 87.8629% ( 8) 00:30:42.792 15.127 - 15.244: 87.9426% ( 7) 00:30:42.792 15.244 - 15.360: 87.9768% ( 3) 00:30:42.792 15.360 - 15.476: 88.0565% ( 7) 00:30:42.792 15.476 - 15.593: 88.1134% ( 5) 00:30:42.792 15.593 - 15.709: 88.1703% ( 5) 00:30:42.792 15.709 - 15.825: 88.2159% ( 4) 00:30:42.792 15.825 - 15.942: 88.2500% ( 3) 00:30:42.792 15.942 - 16.058: 88.3070% ( 5) 00:30:42.792 16.058 - 16.175: 88.3525% ( 4) 00:30:42.792 16.175 - 16.291: 88.3639% ( 1) 00:30:42.792 16.291 - 16.407: 88.4094% ( 4) 00:30:42.792 16.407 - 16.524: 88.4208% ( 1) 00:30:42.792 16.524 - 16.640: 88.4322% ( 1) 00:30:42.792 16.989 - 17.105: 88.4436% ( 1) 00:30:42.792 17.222 - 17.338: 88.4550% ( 1) 00:30:42.792 17.338 - 17.455: 88.4664% ( 1) 00:30:42.792 17.455 - 17.571: 88.4777% ( 1) 00:30:42.792 17.571 - 17.687: 88.4891% ( 1) 00:30:42.792 17.687 - 17.804: 88.5005% ( 1) 00:30:42.792 17.804 - 17.920: 88.5119% ( 1) 00:30:42.792 18.153 - 18.269: 88.5347% ( 2) 00:30:42.792 18.385 - 18.502: 88.5461% ( 1) 00:30:42.792 18.502 - 18.618: 88.5574% ( 1) 00:30:42.792 18.618 - 18.735: 88.5802% ( 2) 00:30:42.792 18.851 - 18.967: 88.5916% ( 1) 00:30:42.792 18.967 - 19.084: 88.6144% ( 2) 00:30:42.792 19.084 - 19.200: 88.6485% ( 3) 00:30:42.792 19.200 - 19.316: 88.6599% ( 1) 00:30:42.792 19.316 - 19.433: 88.6713% ( 1) 00:30:42.792 19.433 - 19.549: 88.6827% ( 1) 00:30:42.792 19.549 - 19.665: 88.6941% ( 1) 00:30:42.792 19.665 - 19.782: 88.7055% ( 1) 00:30:42.792 19.782 - 19.898: 88.7168% ( 1) 00:30:42.792 20.131 - 20.247: 88.7282% ( 1) 00:30:42.792 20.247 - 20.364: 88.7396% ( 1) 00:30:42.792 20.364 - 20.480: 88.7624% ( 2) 00:30:42.792 20.713 - 20.829: 88.7738% ( 1) 00:30:42.792 21.062 - 21.178: 88.7852% ( 1) 00:30:42.792 21.178 - 21.295: 88.7965% ( 1) 00:30:42.792 21.295 - 21.411: 88.8307% ( 3) 00:30:42.792 21.411 - 21.527: 88.8535% ( 2) 00:30:42.792 21.644 - 21.760: 88.8762% ( 2) 00:30:42.792 21.760 - 21.876: 88.8990% ( 2) 00:30:42.792 21.993 - 22.109: 88.9218% ( 2) 00:30:42.792 22.109 - 22.225: 88.9332% ( 1) 00:30:42.792 22.225 - 22.342: 88.9446% ( 1) 00:30:42.792 22.575 - 22.691: 88.9559% ( 1) 00:30:42.792 22.807 - 22.924: 88.9673% ( 1) 00:30:42.792 22.924 - 23.040: 88.9787% ( 1) 00:30:42.792 23.156 - 23.273: 88.9901% ( 1) 00:30:42.792 23.505 - 23.622: 89.0015% ( 1) 00:30:42.792 23.971 - 24.087: 89.0129% ( 1) 00:30:42.792 24.204 - 24.320: 89.0243% ( 1) 00:30:42.792 24.320 - 24.436: 89.0356% ( 1) 00:30:42.792 24.785 - 24.902: 89.0470% ( 1) 00:30:42.792 25.018 - 25.135: 89.0698% ( 2) 00:30:42.792 25.135 - 25.251: 89.0926% ( 2) 00:30:42.792 25.716 - 25.833: 89.1040% ( 1) 00:30:42.792 25.949 - 26.065: 89.1950% ( 8) 00:30:42.792 26.065 - 26.182: 89.2975% ( 9) 00:30:42.792 
26.182 - 26.298: 89.4683% ( 15) 00:30:42.792 26.298 - 26.415: 89.7188% ( 22) 00:30:42.792 26.415 - 26.531: 89.9579% ( 21) 00:30:42.792 26.531 - 26.647: 90.2881% ( 29) 00:30:42.792 26.647 - 26.764: 90.6410% ( 31) 00:30:42.792 26.764 - 26.880: 90.9370% ( 26) 00:30:42.792 26.880 - 26.996: 91.2672% ( 29) 00:30:42.792 26.996 - 27.113: 91.6429% ( 33) 00:30:42.792 27.113 - 27.229: 92.0073% ( 32) 00:30:42.792 27.229 - 27.345: 92.5083% ( 44) 00:30:42.792 27.345 - 27.462: 92.9068% ( 35) 00:30:42.792 27.462 - 27.578: 93.3736% ( 41) 00:30:42.792 27.578 - 27.695: 93.8176% ( 39) 00:30:42.792 27.695 - 27.811: 94.3983% ( 51) 00:30:42.792 27.811 - 27.927: 94.9106% ( 45) 00:30:42.792 27.927 - 28.044: 95.5938% ( 60) 00:30:42.792 28.044 - 28.160: 96.0720% ( 42) 00:30:42.792 28.160 - 28.276: 96.5615% ( 43) 00:30:42.792 28.276 - 28.393: 97.0397% ( 42) 00:30:42.792 28.393 - 28.509: 97.3813% ( 30) 00:30:42.792 28.509 - 28.625: 97.8140% ( 38) 00:30:42.792 28.625 - 28.742: 98.1328% ( 28) 00:30:42.792 28.742 - 28.858: 98.4743% ( 30) 00:30:42.792 28.858 - 28.975: 98.7362% ( 23) 00:30:42.792 28.975 - 29.091: 98.8956% ( 14) 00:30:42.792 29.091 - 29.207: 98.9525% ( 5) 00:30:42.792 29.207 - 29.324: 99.0664% ( 10) 00:30:42.792 29.324 - 29.440: 99.1233% ( 5) 00:30:42.792 29.440 - 29.556: 99.1575% ( 3) 00:30:42.792 29.556 - 29.673: 99.2372% ( 7) 00:30:42.792 29.673 - 29.789: 99.2599% ( 2) 00:30:42.792 29.789 - 30.022: 99.3055% ( 4) 00:30:42.792 30.022 - 30.255: 99.3282% ( 2) 00:30:42.792 30.255 - 30.487: 99.3624% ( 3) 00:30:42.792 30.487 - 30.720: 99.4307% ( 6) 00:30:42.792 30.720 - 30.953: 99.4421% ( 1) 00:30:42.792 30.953 - 31.185: 99.4763% ( 3) 00:30:42.792 31.185 - 31.418: 99.4990% ( 2) 00:30:42.792 31.651 - 31.884: 99.5104% ( 1) 00:30:42.792 32.116 - 32.349: 99.5218% ( 1) 00:30:42.792 32.349 - 32.582: 99.5332% ( 1) 00:30:42.792 32.815 - 33.047: 99.5446% ( 1) 00:30:42.792 33.047 - 33.280: 99.5673% ( 2) 00:30:42.792 33.280 - 33.513: 99.5787% ( 1) 00:30:42.792 33.513 - 33.745: 99.5901% ( 1) 00:30:42.792 33.745 - 33.978: 99.6015% ( 1) 00:30:42.792 34.444 - 34.676: 99.6243% ( 2) 00:30:42.792 34.909 - 35.142: 99.6357% ( 1) 00:30:42.792 35.142 - 35.375: 99.6470% ( 1) 00:30:42.792 35.607 - 35.840: 99.6584% ( 1) 00:30:42.792 35.840 - 36.073: 99.6698% ( 1) 00:30:42.792 36.073 - 36.305: 99.7040% ( 3) 00:30:42.792 37.004 - 37.236: 99.7154% ( 1) 00:30:42.792 38.400 - 38.633: 99.7267% ( 1) 00:30:42.792 38.865 - 39.098: 99.7381% ( 1) 00:30:42.793 39.564 - 39.796: 99.7495% ( 1) 00:30:42.793 40.029 - 40.262: 99.7723% ( 2) 00:30:42.793 42.589 - 42.822: 99.7837% ( 1) 00:30:42.793 43.055 - 43.287: 99.8178% ( 3) 00:30:42.793 43.520 - 43.753: 99.8520% ( 3) 00:30:42.793 44.218 - 44.451: 99.8634% ( 1) 00:30:42.793 44.451 - 44.684: 99.8748% ( 1) 00:30:42.793 44.684 - 44.916: 99.8861% ( 1) 00:30:42.793 48.175 - 48.407: 99.8975% ( 1) 00:30:42.793 48.640 - 48.873: 99.9089% ( 1) 00:30:42.793 49.338 - 49.571: 99.9203% ( 1) 00:30:42.793 50.735 - 50.967: 99.9317% ( 1) 00:30:42.793 53.295 - 53.527: 99.9431% ( 1) 00:30:42.793 54.924 - 55.156: 99.9545% ( 1) 00:30:42.793 61.905 - 62.371: 99.9658% ( 1) 00:30:42.793 68.887 - 69.353: 99.9772% ( 1) 00:30:42.793 100.538 - 101.004: 99.9886% ( 1) 00:30:42.793 185.251 - 186.182: 100.0000% ( 1) 00:30:42.793 00:30:42.793 Complete histogram 00:30:42.793 ================== 00:30:42.793 Range in us Cumulative Count 00:30:42.793 7.855 - 7.913: 0.0114% ( 1) 00:30:42.793 7.913 - 7.971: 0.1139% ( 9) 00:30:42.793 7.971 - 8.029: 0.5579% ( 39) 00:30:42.793 8.029 - 8.087: 1.4915% ( 82) 00:30:42.793 8.087 - 8.145: 3.0855% ( 
140) 00:30:42.793 8.145 - 8.204: 5.4765% ( 210) 00:30:42.793 8.204 - 8.262: 8.4481% ( 261) 00:30:42.793 8.262 - 8.320: 12.9568% ( 396) 00:30:42.793 8.320 - 8.378: 20.7788% ( 687) 00:30:42.793 8.378 - 8.436: 27.9973% ( 634) 00:30:42.793 8.436 - 8.495: 33.7015% ( 501) 00:30:42.793 8.495 - 8.553: 39.4284% ( 503) 00:30:42.793 8.553 - 8.611: 49.3339% ( 870) 00:30:42.793 8.611 - 8.669: 56.9054% ( 665) 00:30:42.793 8.669 - 8.727: 61.8240% ( 432) 00:30:42.793 8.727 - 8.785: 65.5015% ( 323) 00:30:42.793 8.785 - 8.844: 70.8300% ( 468) 00:30:42.793 8.844 - 8.902: 74.8833% ( 356) 00:30:42.793 8.902 - 8.960: 76.9099% ( 178) 00:30:42.793 8.960 - 9.018: 78.3901% ( 130) 00:30:42.793 9.018 - 9.076: 80.3029% ( 168) 00:30:42.793 9.076 - 9.135: 81.9879% ( 148) 00:30:42.793 9.135 - 9.193: 83.1834% ( 105) 00:30:42.793 9.193 - 9.251: 83.8552% ( 59) 00:30:42.793 9.251 - 9.309: 84.4017% ( 48) 00:30:42.793 9.309 - 9.367: 84.9937% ( 52) 00:30:42.793 9.367 - 9.425: 85.3467% ( 31) 00:30:42.793 9.425 - 9.484: 85.6086% ( 23) 00:30:42.793 9.484 - 9.542: 85.9046% ( 26) 00:30:42.793 9.542 - 9.600: 86.1778% ( 24) 00:30:42.793 9.600 - 9.658: 86.4056% ( 20) 00:30:42.793 9.658 - 9.716: 86.6105% ( 18) 00:30:42.793 9.716 - 9.775: 86.9179% ( 27) 00:30:42.793 9.775 - 9.833: 87.1456% ( 20) 00:30:42.793 9.833 - 9.891: 87.4416% ( 26) 00:30:42.793 9.891 - 9.949: 87.6921% ( 22) 00:30:42.793 9.949 - 10.007: 87.9085% ( 19) 00:30:42.793 10.007 - 10.065: 88.0565% ( 13) 00:30:42.793 10.065 - 10.124: 88.2728% ( 19) 00:30:42.793 10.124 - 10.182: 88.4322% ( 14) 00:30:42.793 10.182 - 10.240: 88.6258% ( 17) 00:30:42.793 10.240 - 10.298: 88.7396% ( 10) 00:30:42.793 10.298 - 10.356: 88.8421% ( 9) 00:30:42.793 10.356 - 10.415: 88.9559% ( 10) 00:30:42.793 10.415 - 10.473: 89.0015% ( 4) 00:30:42.793 10.473 - 10.531: 89.1267% ( 11) 00:30:42.793 10.531 - 10.589: 89.2406% ( 10) 00:30:42.793 10.589 - 10.647: 89.3317% ( 8) 00:30:42.793 10.647 - 10.705: 89.4569% ( 11) 00:30:42.793 10.705 - 10.764: 89.6163% ( 14) 00:30:42.793 10.764 - 10.822: 89.6618% ( 4) 00:30:42.793 10.822 - 10.880: 89.7757% ( 10) 00:30:42.793 10.880 - 10.938: 89.8440% ( 6) 00:30:42.793 10.938 - 10.996: 89.8782% ( 3) 00:30:42.793 10.996 - 11.055: 89.9009% ( 2) 00:30:42.793 11.055 - 11.113: 89.9237% ( 2) 00:30:42.793 11.113 - 11.171: 89.9351% ( 1) 00:30:42.793 11.171 - 11.229: 89.9693% ( 3) 00:30:42.793 11.229 - 11.287: 89.9920% ( 2) 00:30:42.793 11.345 - 11.404: 90.0034% ( 1) 00:30:42.793 11.462 - 11.520: 90.0262% ( 2) 00:30:42.793 11.520 - 11.578: 90.0376% ( 1) 00:30:42.793 11.578 - 11.636: 90.0603% ( 2) 00:30:42.793 11.753 - 11.811: 90.0717% ( 1) 00:30:42.793 12.044 - 12.102: 90.0831% ( 1) 00:30:42.793 12.102 - 12.160: 90.0945% ( 1) 00:30:42.793 12.276 - 12.335: 90.1059% ( 1) 00:30:42.793 12.567 - 12.625: 90.1400% ( 3) 00:30:42.793 12.684 - 12.742: 90.1514% ( 1) 00:30:42.793 12.800 - 12.858: 90.1628% ( 1) 00:30:42.793 12.916 - 12.975: 90.1742% ( 1) 00:30:42.793 13.324 - 13.382: 90.1856% ( 1) 00:30:42.793 13.382 - 13.440: 90.1970% ( 1) 00:30:42.793 13.556 - 13.615: 90.2084% ( 1) 00:30:42.793 13.731 - 13.789: 90.2197% ( 1) 00:30:42.793 13.847 - 13.905: 90.2425% ( 2) 00:30:42.793 13.905 - 13.964: 90.2539% ( 1) 00:30:42.793 14.022 - 14.080: 90.2881% ( 3) 00:30:42.793 14.080 - 14.138: 90.2994% ( 1) 00:30:42.793 14.313 - 14.371: 90.3222% ( 2) 00:30:42.793 14.371 - 14.429: 90.3336% ( 1) 00:30:42.793 14.545 - 14.604: 90.3450% ( 1) 00:30:42.793 14.604 - 14.662: 90.3564% ( 1) 00:30:42.793 14.836 - 14.895: 90.3905% ( 3) 00:30:42.793 14.895 - 15.011: 90.4361% ( 4) 00:30:42.793 15.011 - 15.127: 
90.4702% ( 3) 00:30:42.793 15.127 - 15.244: 90.4930% ( 2) 00:30:42.793 15.244 - 15.360: 90.5272% ( 3) 00:30:42.793 15.360 - 15.476: 90.5727% ( 4) 00:30:42.793 15.476 - 15.593: 90.5841% ( 1) 00:30:42.793 15.593 - 15.709: 90.6069% ( 2) 00:30:42.793 15.709 - 15.825: 90.6296% ( 2) 00:30:42.793 15.825 - 15.942: 90.6752% ( 4) 00:30:42.793 15.942 - 16.058: 90.7093% ( 3) 00:30:42.793 16.058 - 16.175: 90.7321% ( 2) 00:30:42.793 16.175 - 16.291: 90.7663% ( 3) 00:30:42.793 16.291 - 16.407: 90.8118% ( 4) 00:30:42.793 16.407 - 16.524: 90.8232% ( 1) 00:30:42.793 16.524 - 16.640: 90.8573% ( 3) 00:30:42.793 16.640 - 16.756: 90.8801% ( 2) 00:30:42.793 17.105 - 17.222: 90.8915% ( 1) 00:30:42.793 17.222 - 17.338: 90.9143% ( 2) 00:30:42.793 17.338 - 17.455: 90.9257% ( 1) 00:30:42.793 17.455 - 17.571: 90.9370% ( 1) 00:30:42.793 17.920 - 18.036: 90.9484% ( 1) 00:30:42.793 18.153 - 18.269: 90.9712% ( 2) 00:30:42.793 18.385 - 18.502: 90.9826% ( 1) 00:30:42.793 18.618 - 18.735: 90.9940% ( 1) 00:30:42.793 18.735 - 18.851: 91.0054% ( 1) 00:30:42.793 19.898 - 20.015: 91.0281% ( 2) 00:30:42.793 20.131 - 20.247: 91.0395% ( 1) 00:30:42.793 20.364 - 20.480: 91.0509% ( 1) 00:30:42.793 20.480 - 20.596: 91.0737% ( 2) 00:30:42.793 20.596 - 20.713: 91.0851% ( 1) 00:30:42.793 20.829 - 20.945: 91.0964% ( 1) 00:30:42.793 21.062 - 21.178: 91.1078% ( 1) 00:30:42.793 21.527 - 21.644: 91.1192% ( 1) 00:30:42.793 22.225 - 22.342: 91.1534% ( 3) 00:30:42.793 22.342 - 22.458: 91.1989% ( 4) 00:30:42.793 22.458 - 22.575: 91.2786% ( 7) 00:30:42.793 22.575 - 22.691: 91.5063% ( 20) 00:30:42.793 22.691 - 22.807: 91.8023% ( 26) 00:30:42.793 22.807 - 22.924: 92.2350% ( 38) 00:30:42.793 22.924 - 23.040: 92.7132% ( 42) 00:30:42.793 23.040 - 23.156: 93.4760% ( 67) 00:30:42.793 23.156 - 23.273: 94.1250% ( 57) 00:30:42.793 23.273 - 23.389: 94.7626% ( 56) 00:30:42.793 23.389 - 23.505: 95.4571% ( 61) 00:30:42.793 23.505 - 23.622: 96.2086% ( 66) 00:30:42.793 23.622 - 23.738: 96.9373% ( 64) 00:30:42.793 23.738 - 23.855: 97.4041% ( 41) 00:30:42.793 23.855 - 23.971: 97.8140% ( 36) 00:30:42.793 23.971 - 24.087: 98.2125% ( 35) 00:30:42.793 24.087 - 24.204: 98.4516% ( 21) 00:30:42.793 24.204 - 24.320: 98.6793% ( 20) 00:30:42.793 24.320 - 24.436: 98.8045% ( 11) 00:30:42.793 24.436 - 24.553: 98.9298% ( 11) 00:30:42.793 24.553 - 24.669: 99.0436% ( 10) 00:30:42.793 24.669 - 24.785: 99.1005% ( 5) 00:30:42.793 24.785 - 24.902: 99.1461% ( 4) 00:30:42.793 24.902 - 25.018: 99.2372% ( 8) 00:30:42.793 25.018 - 25.135: 99.2713% ( 3) 00:30:42.793 25.135 - 25.251: 99.3282% ( 5) 00:30:42.793 25.251 - 25.367: 99.3738% ( 4) 00:30:42.793 25.367 - 25.484: 99.4421% ( 6) 00:30:42.793 25.484 - 25.600: 99.4763% ( 3) 00:30:42.793 25.600 - 25.716: 99.5218% ( 4) 00:30:42.793 25.716 - 25.833: 99.5560% ( 3) 00:30:42.793 25.949 - 26.065: 99.5673% ( 1) 00:30:42.793 26.182 - 26.298: 99.5787% ( 1) 00:30:42.793 26.298 - 26.415: 99.6129% ( 3) 00:30:42.793 26.531 - 26.647: 99.6357% ( 2) 00:30:42.793 26.880 - 26.996: 99.6470% ( 1) 00:30:42.793 27.113 - 27.229: 99.6698% ( 2) 00:30:42.793 27.811 - 27.927: 99.6812% ( 1) 00:30:42.793 27.927 - 28.044: 99.6926% ( 1) 00:30:42.793 29.091 - 29.207: 99.7040% ( 1) 00:30:42.793 29.324 - 29.440: 99.7267% ( 2) 00:30:42.793 30.022 - 30.255: 99.7381% ( 1) 00:30:42.793 30.255 - 30.487: 99.7495% ( 1) 00:30:42.793 30.720 - 30.953: 99.7609% ( 1) 00:30:42.793 30.953 - 31.185: 99.7837% ( 2) 00:30:42.793 31.185 - 31.418: 99.8178% ( 3) 00:30:42.793 31.418 - 31.651: 99.8292% ( 1) 00:30:42.793 31.884 - 32.116: 99.8406% ( 1) 00:30:42.793 32.349 - 32.582: 99.8634% ( 2) 
00:30:42.793 32.582 - 32.815: 99.8748% ( 1) 00:30:42.793 32.815 - 33.047: 99.8861% ( 1) 00:30:42.793 33.280 - 33.513: 99.8975% ( 1) 00:30:42.793 38.633 - 38.865: 99.9089% ( 1) 00:30:42.793 38.865 - 39.098: 99.9203% ( 1) 00:30:42.793 39.098 - 39.331: 99.9317% ( 1) 00:30:42.793 40.960 - 41.193: 99.9431% ( 1) 00:30:42.793 46.080 - 46.313: 99.9545% ( 1) 00:30:42.793 51.433 - 51.665: 99.9658% ( 1) 00:30:42.793 66.560 - 67.025: 99.9772% ( 1) 00:30:42.794 102.865 - 103.331: 99.9886% ( 1) 00:30:42.794 1213.905 - 1221.353: 100.0000% ( 1) 00:30:42.794 00:30:42.794 00:30:42.794 real 0m1.333s 00:30:42.794 user 0m1.138s 00:30:42.794 sys 0m0.104s 00:30:42.794 16:46:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:42.794 16:46:19 -- common/autotest_common.sh@10 -- # set +x 00:30:42.794 ************************************ 00:30:42.794 END TEST nvme_overhead 00:30:42.794 ************************************ 00:30:42.794 16:46:19 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:42.794 16:46:19 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:30:42.794 16:46:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:42.794 16:46:19 -- common/autotest_common.sh@10 -- # set +x 00:30:42.794 ************************************ 00:30:42.794 START TEST nvme_arbitration 00:30:42.794 ************************************ 00:30:42.794 16:46:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:47.012 Initializing NVMe Controllers 00:30:47.012 Attached to 0000:00:06.0 00:30:47.012 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:30:47.012 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:30:47.012 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:30:47.012 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:30:47.012 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:30:47.012 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:30:47.012 Initialization complete. Launching workers. 
00:30:47.012 Starting thread on core 1 with urgent priority queue 00:30:47.012 Starting thread on core 2 with urgent priority queue 00:30:47.012 Starting thread on core 3 with urgent priority queue 00:30:47.012 Starting thread on core 0 with urgent priority queue 00:30:47.012 QEMU NVMe Ctrl (12340 ) core 0: 1344.00 IO/s 74.40 secs/100000 ios 00:30:47.012 QEMU NVMe Ctrl (12340 ) core 1: 1258.67 IO/s 79.45 secs/100000 ios 00:30:47.012 QEMU NVMe Ctrl (12340 ) core 2: 618.67 IO/s 161.64 secs/100000 ios 00:30:47.012 QEMU NVMe Ctrl (12340 ) core 3: 789.33 IO/s 126.69 secs/100000 ios 00:30:47.012 ======================================================== 00:30:47.012 00:30:47.012 00:30:47.012 real 0m3.475s 00:30:47.012 user 0m9.425s 00:30:47.012 sys 0m0.152s 00:30:47.012 16:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.012 16:46:23 -- common/autotest_common.sh@10 -- # set +x 00:30:47.012 ************************************ 00:30:47.012 END TEST nvme_arbitration 00:30:47.012 ************************************ 00:30:47.012 16:46:23 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:30:47.012 16:46:23 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:30:47.012 16:46:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:47.012 16:46:23 -- common/autotest_common.sh@10 -- # set +x 00:30:47.012 ************************************ 00:30:47.012 START TEST nvme_single_aen 00:30:47.012 ************************************ 00:30:47.012 16:46:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:30:47.012 [2024-07-11 16:46:23.129622] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:47.012 [2024-07-11 16:46:23.129742] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:47.012 [2024-07-11 16:46:23.332852] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:47.012 Asynchronous Event Request test 00:30:47.012 Attached to 0000:00:06.0 00:30:47.012 Reset controller to setup AER completions for this process 00:30:47.012 Registering asynchronous event callbacks... 00:30:47.012 Getting orig temperature thresholds of all controllers 00:30:47.012 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:47.012 Setting all controllers temperature threshold low to trigger AER 00:30:47.012 Waiting for all controllers temperature threshold to be set lower 00:30:47.012 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:47.012 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:30:47.012 Waiting for all controllers to trigger AER and reset threshold 00:30:47.012 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:47.012 Cleaning up... 
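The aer binary behind nvme_single_aen takes the flags shown in its command line above. Judging from the test's own output, -T lowers the temperature threshold to force an Asynchronous Event Request, -i selects the shared-memory ID, and -L enables a log trace flag; these meanings are inferred from the run, not taken from the tool's help text:

  sudo /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log   # -T: temperature-threshold AER test; -L log: enable the "log" trace flag (assumed)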
00:30:47.012 00:30:47.012 real 0m0.295s 00:30:47.012 user 0m0.107s 00:30:47.012 sys 0m0.117s 00:30:47.012 16:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.012 ************************************ 00:30:47.012 END TEST nvme_single_aen 00:30:47.012 ************************************ 00:30:47.012 16:46:23 -- common/autotest_common.sh@10 -- # set +x 00:30:47.012 16:46:23 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:30:47.012 16:46:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:47.012 16:46:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:47.012 16:46:23 -- common/autotest_common.sh@10 -- # set +x 00:30:47.012 ************************************ 00:30:47.012 START TEST nvme_doorbell_aers 00:30:47.012 ************************************ 00:30:47.012 16:46:23 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:30:47.012 16:46:23 -- nvme/nvme.sh@70 -- # bdfs=() 00:30:47.012 16:46:23 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:30:47.012 16:46:23 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:30:47.012 16:46:23 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:30:47.012 16:46:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:47.012 16:46:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:47.012 16:46:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:47.012 16:46:23 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:47.012 16:46:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:47.012 16:46:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:47.012 16:46:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:30:47.012 16:46:23 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:47.012 16:46:23 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:30:47.270 [2024-07-11 16:46:23.834914] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143616) is not found. Dropping the request. 00:30:57.236 Executing: test_write_invalid_db 00:30:57.236 Waiting for AER completion... 00:30:57.236 Failure: test_write_invalid_db 00:30:57.236 00:30:57.236 Executing: test_invalid_db_write_overflow_sq 00:30:57.236 Waiting for AER completion... 00:30:57.236 Failure: test_invalid_db_write_overflow_sq 00:30:57.236 00:30:57.236 Executing: test_invalid_db_write_overflow_cq 00:30:57.236 Waiting for AER completion... 
00:30:57.236 Failure: test_invalid_db_write_overflow_cq 00:30:57.236 00:30:57.236 00:30:57.236 real 0m10.110s 00:30:57.236 user 0m8.584s 00:30:57.236 sys 0m1.419s 00:30:57.236 16:46:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:57.236 16:46:33 -- common/autotest_common.sh@10 -- # set +x 00:30:57.236 ************************************ 00:30:57.236 END TEST nvme_doorbell_aers 00:30:57.236 ************************************ 00:30:57.236 16:46:33 -- nvme/nvme.sh@97 -- # uname 00:30:57.236 16:46:33 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:30:57.236 16:46:33 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:30:57.236 16:46:33 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:30:57.236 16:46:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:57.236 16:46:33 -- common/autotest_common.sh@10 -- # set +x 00:30:57.236 ************************************ 00:30:57.236 START TEST nvme_multi_aen 00:30:57.236 ************************************ 00:30:57.236 16:46:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:30:57.236 [2024-07-11 16:46:33.642390] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:57.236 [2024-07-11 16:46:33.642696] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.236 [2024-07-11 16:46:33.838680] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:30:57.236 [2024-07-11 16:46:33.838763] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143616) is not found. Dropping the request. 00:30:57.236 [2024-07-11 16:46:33.838879] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143616) is not found. Dropping the request. 00:30:57.236 [2024-07-11 16:46:33.838936] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143616) is not found. Dropping the request. 00:30:57.236 [2024-07-11 16:46:33.845379] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:30:57.236 Child process pid: 143818 00:30:57.236 [2024-07-11 16:46:33.845503] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.493 [Child] Asynchronous Event Request test 00:30:57.493 [Child] Attached to 0000:00:06.0 00:30:57.493 [Child] Registering asynchronous event callbacks... 00:30:57.493 [Child] Getting orig temperature thresholds of all controllers 00:30:57.493 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:57.493 [Child] Waiting for all controllers to trigger AER and reset threshold 00:30:57.493 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:57.493 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:57.493 [Child] Cleaning up... 00:30:57.493 Asynchronous Event Request test 00:30:57.493 Attached to 0000:00:06.0 00:30:57.493 Reset controller to setup AER completions for this process 00:30:57.493 Registering asynchronous event callbacks... 
00:30:57.493 Getting orig temperature thresholds of all controllers 00:30:57.493 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:57.493 Setting all controllers temperature threshold low to trigger AER 00:30:57.493 Waiting for all controllers temperature threshold to be set lower 00:30:57.493 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:57.493 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:30:57.493 Waiting for all controllers to trigger AER and reset threshold 00:30:57.493 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:57.493 Cleaning up... 00:30:57.493 00:30:57.493 real 0m0.628s 00:30:57.493 user 0m0.225s 00:30:57.493 sys 0m0.229s 00:30:57.493 16:46:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:57.493 16:46:34 -- common/autotest_common.sh@10 -- # set +x 00:30:57.493 ************************************ 00:30:57.493 END TEST nvme_multi_aen 00:30:57.493 ************************************ 00:30:57.493 16:46:34 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:30:57.493 16:46:34 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:30:57.493 16:46:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:57.493 16:46:34 -- common/autotest_common.sh@10 -- # set +x 00:30:57.493 ************************************ 00:30:57.493 START TEST nvme_startup 00:30:57.493 ************************************ 00:30:57.493 16:46:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:30:57.751 Initializing NVMe Controllers 00:30:57.751 Attached to 0000:00:06.0 00:30:57.751 Initialization complete. 00:30:57.751 Time used:193867.266 (us). 00:30:57.751 00:30:57.751 real 0m0.279s 00:30:57.751 user 0m0.087s 00:30:57.751 sys 0m0.128s 00:30:57.751 16:46:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:57.751 ************************************ 00:30:57.751 16:46:34 -- common/autotest_common.sh@10 -- # set +x 00:30:57.751 END TEST nvme_startup 00:30:57.751 ************************************ 00:30:58.008 16:46:34 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:30:58.008 16:46:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:58.008 16:46:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:58.008 16:46:34 -- common/autotest_common.sh@10 -- # set +x 00:30:58.008 ************************************ 00:30:58.008 START TEST nvme_multi_secondary 00:30:58.008 ************************************ 00:30:58.008 16:46:34 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:30:58.008 16:46:34 -- nvme/nvme.sh@52 -- # pid0=143883 00:30:58.008 16:46:34 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:30:58.008 16:46:34 -- nvme/nvme.sh@54 -- # pid1=143884 00:30:58.008 16:46:34 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:30:58.008 16:46:34 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:01.287 Initializing NVMe Controllers 00:31:01.287 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:01.287 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:31:01.287 Initialization complete. Launching workers. 
00:31:01.287 ======================================================== 00:31:01.287 Latency(us) 00:31:01.287 Device Information : IOPS MiB/s Average min max 00:31:01.287 PCIE (0000:00:06.0) NSID 1 from core 1: 32337.32 126.32 494.42 145.32 16512.43 00:31:01.287 ======================================================== 00:31:01.287 Total : 32337.32 126.32 494.42 145.32 16512.43 00:31:01.287 00:31:01.287 Initializing NVMe Controllers 00:31:01.287 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:01.287 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:31:01.287 Initialization complete. Launching workers. 00:31:01.287 ======================================================== 00:31:01.287 Latency(us) 00:31:01.287 Device Information : IOPS MiB/s Average min max 00:31:01.287 PCIE (0000:00:06.0) NSID 1 from core 2: 14584.33 56.97 1096.33 147.21 24791.75 00:31:01.287 ======================================================== 00:31:01.287 Total : 14584.33 56.97 1096.33 147.21 24791.75 00:31:01.287 00:31:01.287 16:46:38 -- nvme/nvme.sh@56 -- # wait 143883 00:31:03.817 Initializing NVMe Controllers 00:31:03.817 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:03.817 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:31:03.817 Initialization complete. Launching workers. 00:31:03.817 ======================================================== 00:31:03.817 Latency(us) 00:31:03.817 Device Information : IOPS MiB/s Average min max 00:31:03.817 PCIE (0000:00:06.0) NSID 1 from core 0: 43868.57 171.36 364.39 109.93 2409.33 00:31:03.817 ======================================================== 00:31:03.817 Total : 43868.57 171.36 364.39 109.93 2409.33 00:31:03.817 00:31:03.817 16:46:40 -- nvme/nvme.sh@57 -- # wait 143884 00:31:03.817 16:46:40 -- nvme/nvme.sh@61 -- # pid0=143982 00:31:03.817 16:46:40 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:31:03.817 16:46:40 -- nvme/nvme.sh@63 -- # pid1=143983 00:31:03.817 16:46:40 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:31:03.817 16:46:40 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:31:07.099 Initializing NVMe Controllers 00:31:07.099 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:07.099 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:31:07.099 Initialization complete. Launching workers. 00:31:07.099 ======================================================== 00:31:07.099 Latency(us) 00:31:07.099 Device Information : IOPS MiB/s Average min max 00:31:07.099 PCIE (0000:00:06.0) NSID 1 from core 1: 35397.32 138.27 451.63 122.24 2111.23 00:31:07.099 ======================================================== 00:31:07.099 Total : 35397.32 138.27 451.63 122.24 2111.23 00:31:07.099 00:31:07.099 Initializing NVMe Controllers 00:31:07.099 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:07.099 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:31:07.099 Initialization complete. Launching workers. 
00:31:07.099 ======================================================== 00:31:07.099 Latency(us) 00:31:07.099 Device Information : IOPS MiB/s Average min max 00:31:07.099 PCIE (0000:00:06.0) NSID 1 from core 0: 35553.45 138.88 449.67 129.67 5853.60 00:31:07.099 ======================================================== 00:31:07.099 Total : 35553.45 138.88 449.67 129.67 5853.60 00:31:07.099 00:31:09.001 Initializing NVMe Controllers 00:31:09.001 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:31:09.001 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:31:09.001 Initialization complete. Launching workers. 00:31:09.001 ======================================================== 00:31:09.001 Latency(us) 00:31:09.001 Device Information : IOPS MiB/s Average min max 00:31:09.001 PCIE (0000:00:06.0) NSID 1 from core 2: 18222.00 71.18 877.42 125.13 24300.26 00:31:09.001 ======================================================== 00:31:09.001 Total : 18222.00 71.18 877.42 125.13 24300.26 00:31:09.001 00:31:09.259 16:46:45 -- nvme/nvme.sh@65 -- # wait 143982 00:31:09.259 16:46:45 -- nvme/nvme.sh@66 -- # wait 143983 00:31:09.259 00:31:09.259 real 0m11.212s 00:31:09.259 user 0m18.604s 00:31:09.259 sys 0m0.807s 00:31:09.259 16:46:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:09.259 ************************************ 00:31:09.259 END TEST nvme_multi_secondary 00:31:09.259 16:46:45 -- common/autotest_common.sh@10 -- # set +x 00:31:09.259 ************************************ 00:31:09.259 16:46:45 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:31:09.259 16:46:45 -- nvme/nvme.sh@102 -- # kill_stub 00:31:09.259 16:46:45 -- common/autotest_common.sh@1065 -- # [[ -e /proc/143167 ]] 00:31:09.259 16:46:45 -- common/autotest_common.sh@1066 -- # kill 143167 00:31:09.259 16:46:45 -- common/autotest_common.sh@1067 -- # wait 143167 00:31:09.825 [2024-07-11 16:46:46.536459] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143817) is not found. Dropping the request. 00:31:09.825 [2024-07-11 16:46:46.536918] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143817) is not found. Dropping the request. 00:31:09.825 [2024-07-11 16:46:46.537128] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143817) is not found. Dropping the request. 00:31:09.825 [2024-07-11 16:46:46.537321] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 143817) is not found. Dropping the request. 00:31:10.083 16:46:46 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:31:10.083 16:46:46 -- common/autotest_common.sh@1073 -- # echo 2 00:31:10.083 16:46:46 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:10.083 16:46:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:10.083 16:46:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:10.083 16:46:46 -- common/autotest_common.sh@10 -- # set +x 00:31:10.083 ************************************ 00:31:10.083 START TEST bdev_nvme_reset_stuck_adm_cmd 00:31:10.083 ************************************ 00:31:10.083 16:46:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:31:10.083 * Looking for test storage... 
00:31:10.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:10.083 16:46:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:31:10.083 16:46:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:31:10.083 16:46:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:31:10.083 16:46:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:31:10.083 16:46:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:31:10.083 16:46:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:31:10.083 16:46:46 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:10.083 16:46:46 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:10.083 16:46:46 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:10.083 16:46:46 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:10.083 16:46:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:10.083 16:46:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:10.083 16:46:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:10.083 16:46:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:10.083 16:46:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:10.083 16:46:46 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:10.083 16:46:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:10.083 16:46:46 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:31:10.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.083 16:46:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:31:10.083 16:46:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:31:10.083 16:46:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=144150 00:31:10.083 16:46:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:31:10.083 16:46:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:10.083 16:46:46 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 144150 00:31:10.083 16:46:46 -- common/autotest_common.sh@819 -- # '[' -z 144150 ']' 00:31:10.083 16:46:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.083 16:46:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:10.083 16:46:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.083 16:46:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:10.083 16:46:46 -- common/autotest_common.sh@10 -- # set +x 00:31:10.341 [2024-07-11 16:46:46.942369] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
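The bdf discovery traced above boils down to a single pipeline: gen_nvme.sh emits the JSON bdev config and jq pulls each controller's PCIe address out of it. Extracted as a standalone snippet, with the pipeline and paths exactly as they appear in this run (the guard message is illustrative, not the script's verbatim text):

  # list every NVMe traddr known to the config generator, as get_nvme_bdfs does
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe bdfs found' >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # -> 0000:00:06.0 on this VM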
00:31:10.341 [2024-07-11 16:46:46.942553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144150 ] 00:31:10.341 [2024-07-11 16:46:47.130256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:10.641 [2024-07-11 16:46:47.360990] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:10.641 [2024-07-11 16:46:47.361271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.641 [2024-07-11 16:46:47.361381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:10.641 [2024-07-11 16:46:47.361492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:10.641 [2024-07-11 16:46:47.361495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.023 16:46:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:12.023 16:46:48 -- common/autotest_common.sh@852 -- # return 0 00:31:12.023 16:46:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:31:12.023 16:46:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.023 16:46:48 -- common/autotest_common.sh@10 -- # set +x 00:31:12.023 nvme0n1 00:31:12.023 16:46:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.023 16:46:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:31:12.023 16:46:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_68clH.txt 00:31:12.023 16:46:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:31:12.023 16:46:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.023 16:46:48 -- common/autotest_common.sh@10 -- # set +x 00:31:12.023 true 00:31:12.023 16:46:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.023 16:46:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:31:12.023 16:46:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720716408 00:31:12.023 16:46:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=144192 00:31:12.023 16:46:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:12.023 16:46:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:31:12.023 16:46:48 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:31:13.927 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:13.927 16:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:13.927 16:46:50 -- common/autotest_common.sh@10 -- # set +x 00:31:13.927 [2024-07-11 16:46:50.719039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:13.927 [2024-07-11 16:46:50.719517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.927 [2024-07-11 16:46:50.719624] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:31:13.927 [2024-07-11 16:46:50.719656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.927 [2024-07-11 16:46:50.721575] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:13.927 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 144192 00:31:13.927 16:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:13.927 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 144192 00:31:13.927 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 144192 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:14.186 16:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:14.186 16:46:50 -- common/autotest_common.sh@10 -- # set +x 00:31:14.186 16:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_68clH.txt 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_68clH.txt 00:31:14.186 16:46:50 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 144150 00:31:14.186 16:46:50 -- common/autotest_common.sh@926 -- # '[' -z 144150 ']' 00:31:14.186 16:46:50 -- common/autotest_common.sh@930 -- # kill -0 144150 00:31:14.186 16:46:50 -- common/autotest_common.sh@931 -- # uname 00:31:14.186 
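The block above re-derives the injected status from the raw completion: the .cpl field saved by bdev_nvme_send_cmd is the 16-byte completion queue entry, base64-encoded, and bytes 14-15 carry the little-endian status field (bit 0 phase tag, bits 8:1 Status Code, bits 11:9 Status Code Type, per the NVMe spec layout). The same decode as a self-contained snippet, using the value captured in this run:

  cpl=AAAAAAAAAAAAAAAAAAACAA==                    # .cpl as logged above
  bytes=($(base64 -d <(printf '%s' "$cpl") | hexdump -ve '/1 "0x%02x\n"'))
  status=$(( (bytes[15] << 8) | bytes[14] ))      # little-endian status field
  printf 'SC=0x%x SCT=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
  # -> SC=0x1 SCT=0x0: the Invalid Opcode / Generic pair injected with --sc 1 --sct 0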
16:46:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:14.186 16:46:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 144150 00:31:14.186 16:46:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:14.186 killing process with pid 144150 00:31:14.186 16:46:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:14.186 16:46:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 144150' 00:31:14.186 16:46:50 -- common/autotest_common.sh@945 -- # kill 144150 00:31:14.186 16:46:50 -- common/autotest_common.sh@950 -- # wait 144150 00:31:16.090 16:46:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:31:16.090 16:46:52 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:31:16.090 00:31:16.090 real 0m5.969s 00:31:16.090 user 0m21.481s 00:31:16.090 sys 0m0.597s 00:31:16.090 ************************************ 00:31:16.090 END TEST bdev_nvme_reset_stuck_adm_cmd 00:31:16.090 16:46:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:16.090 16:46:52 -- common/autotest_common.sh@10 -- # set +x 00:31:16.090 ************************************ 00:31:16.090 16:46:52 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:31:16.090 16:46:52 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:31:16.090 16:46:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:16.090 16:46:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:16.090 16:46:52 -- common/autotest_common.sh@10 -- # set +x 00:31:16.090 ************************************ 00:31:16.090 START TEST nvme_fio 00:31:16.090 ************************************ 00:31:16.090 16:46:52 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:31:16.090 16:46:52 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:31:16.090 16:46:52 -- nvme/nvme.sh@32 -- # ran_fio=false 00:31:16.090 16:46:52 -- nvme/nvme.sh@33 -- # bdfs=($(get_nvme_bdfs)) 00:31:16.090 16:46:52 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:31:16.090 16:46:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:16.090 16:46:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:16.090 16:46:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:16.090 16:46:52 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:16.090 16:46:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:16.090 16:46:52 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:16.090 16:46:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:16.090 16:46:52 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:31:16.090 16:46:52 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:16.090 16:46:52 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:16.090 16:46:52 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:16.354 16:46:53 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:16.354 16:46:53 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:16.614 16:46:53 -- nvme/nvme.sh@41 -- # bs=4096 00:31:16.614 16:46:53 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:16.614 
16:46:53 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:16.614 16:46:53 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:16.614 16:46:53 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:31:16.614 16:46:53 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:16.614 16:46:53 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:16.614 16:46:53 -- common/autotest_common.sh@1320 -- # shift 00:31:16.614 16:46:53 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:16.614 16:46:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.614 16:46:53 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:16.614 16:46:53 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:16.614 16:46:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:16.614 16:46:53 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:31:16.614 16:46:53 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:31:16.614 16:46:53 -- common/autotest_common.sh@1326 -- # break 00:31:16.614 16:46:53 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:16.614 16:46:53 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:16.871 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:16.871 fio-3.35 00:31:16.871 Starting 1 thread 00:31:20.153 00:31:20.153 test: (groupid=0, jobs=1): err= 0: pid=144338: Thu Jul 11 16:46:56 2024 00:31:20.153 read: IOPS=15.8k, BW=61.5MiB/s (64.5MB/s)(123MiB/2001msec) 00:31:20.153 slat (nsec): min=3922, max=68893, avg=5899.05, stdev=3352.42 00:31:20.153 clat (usec): min=402, max=10104, avg=4038.96, stdev=455.49 00:31:20.153 lat (usec): min=408, max=10172, avg=4044.86, stdev=455.93 00:31:20.153 clat percentiles (usec): 00:31:20.153 | 1.00th=[ 3261], 5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3720], 00:31:20.153 | 30.00th=[ 3818], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4113], 00:31:20.153 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4621], 00:31:20.153 | 99.00th=[ 5014], 99.50th=[ 6456], 99.90th=[ 8291], 99.95th=[ 9503], 00:31:20.153 | 99.99th=[10028] 00:31:20.153 bw ( KiB/s): min=59872, max=68240, per=100.00%, avg=63186.67, stdev=4446.69, samples=3 00:31:20.153 iops : min=14968, max=17060, avg=15796.67, stdev=1111.67, samples=3 00:31:20.153 write: IOPS=15.8k, BW=61.6MiB/s (64.6MB/s)(123MiB/2001msec); 0 zone resets 00:31:20.153 slat (nsec): min=4007, max=46270, avg=6046.11, stdev=3448.52 00:31:20.153 clat (usec): min=296, max=10034, avg=4054.11, stdev=456.63 00:31:20.153 lat (usec): min=302, max=10054, avg=4060.16, stdev=456.99 00:31:20.153 clat percentiles (usec): 00:31:20.153 | 1.00th=[ 3294], 5.00th=[ 3490], 10.00th=[ 3589], 20.00th=[ 3720], 00:31:20.153 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 4047], 60.00th=[ 4113], 00:31:20.153 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:31:20.153 | 99.00th=[ 5014], 99.50th=[ 6325], 99.90th=[ 8979], 99.95th=[ 
9503], 00:31:20.153 | 99.99th=[ 9896] 00:31:20.153 bw ( KiB/s): min=59352, max=67552, per=99.72%, avg=62890.67, stdev=4213.70, samples=3 00:31:20.153 iops : min=14838, max=16888, avg=15722.67, stdev=1053.43, samples=3 00:31:20.153 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:20.153 lat (msec) : 2=0.04%, 4=46.20%, 10=53.72%, 20=0.01% 00:31:20.153 cpu : usr=99.95%, sys=0.00%, ctx=10, majf=0, minf=36 00:31:20.153 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:20.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:20.153 issued rwts: total=31520,31548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:20.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:20.153 00:31:20.153 Run status group 0 (all jobs): 00:31:20.153 READ: bw=61.5MiB/s (64.5MB/s), 61.5MiB/s-61.5MiB/s (64.5MB/s-64.5MB/s), io=123MiB (129MB), run=2001-2001msec 00:31:20.153 WRITE: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=123MiB (129MB), run=2001-2001msec 00:31:20.154 ----------------------------------------------------- 00:31:20.154 Suppressions used: 00:31:20.154 count bytes template 00:31:20.154 1 32 /usr/src/fio/parse.c 00:31:20.154 ----------------------------------------------------- 00:31:20.154 00:31:20.412 16:46:56 -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:20.412 16:46:56 -- nvme/nvme.sh@46 -- # true 00:31:20.412 ************************************ 00:31:20.412 END TEST nvme_fio 00:31:20.412 ************************************ 00:31:20.412 00:31:20.412 real 0m4.202s 00:31:20.412 user 0m3.504s 00:31:20.412 sys 0m0.360s 00:31:20.412 16:46:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:20.412 16:46:56 -- common/autotest_common.sh@10 -- # set +x 00:31:20.412 00:31:20.412 real 0m48.043s 00:31:20.412 user 2m8.176s 00:31:20.412 sys 0m7.774s 00:31:20.412 16:46:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:20.412 16:46:57 -- common/autotest_common.sh@10 -- # set +x 00:31:20.412 ************************************ 00:31:20.412 END TEST nvme 00:31:20.412 ************************************ 00:31:20.412 16:46:57 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:31:20.412 16:46:57 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:20.412 16:46:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:20.412 16:46:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:20.412 16:46:57 -- common/autotest_common.sh@10 -- # set +x 00:31:20.412 ************************************ 00:31:20.412 START TEST nvme_scc 00:31:20.412 ************************************ 00:31:20.412 16:46:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:20.412 * Looking for test storage... 
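The fio summary above is internally consistent and can be sanity-checked the same way as the perf tables: 31520 reads issued over the 2.001 s runtime give the reported 15.8k IOPS, and multiplying by the 4096-byte block size recovers both bandwidth figures on the READ line of the run status group:

  awk 'BEGIN {
    iops = 31520 / 2.001                       # ~15752, printed as 15.8k
    printf "%.1f MB/s  %.1f MiB/s\n", iops * 4096 / 1e6, iops * 4096 / 1048576
  }'
  # -> 64.5 MB/s / 61.5 MiB/s, matching the run status group above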
00:31:20.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:20.412 16:46:57 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:20.412 16:46:57 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:20.412 16:46:57 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:31:20.412 16:46:57 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:20.412 16:46:57 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:20.412 16:46:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.412 16:46:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.412 16:46:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.412 16:46:57 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:20.412 16:46:57 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:20.412 16:46:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:20.412 16:46:57 -- paths/export.sh@5 -- # export PATH 00:31:20.412 16:46:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:20.412 16:46:57 -- nvme/functions.sh@10 -- # ctrls=() 00:31:20.412 16:46:57 -- nvme/functions.sh@10 -- # declare -A ctrls 00:31:20.412 16:46:57 -- nvme/functions.sh@11 -- # nvmes=() 00:31:20.412 16:46:57 -- nvme/functions.sh@11 -- # declare -A nvmes 00:31:20.412 16:46:57 -- nvme/functions.sh@12 -- # bdfs=() 00:31:20.412 16:46:57 -- nvme/functions.sh@12 -- # declare -A bdfs 00:31:20.412 16:46:57 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:31:20.412 16:46:57 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:31:20.412 16:46:57 -- nvme/functions.sh@14 -- # nvme_name= 00:31:20.412 16:46:57 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:20.412 16:46:57 -- nvme/nvme_scc.sh@12 -- # uname 00:31:20.412 16:46:57 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:31:20.412 16:46:57 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 
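What follows next is scan_nvme_ctrls doing a field-by-field parse: for each controller under /sys/class/nvme, nvme_get runs nvme-cli's id-ctrl (and id-ns per namespace) and stores every "field : value" pair in a bash associative array, which is why the trace below is so long. A simplified sketch of that loop, with the array name, tool path, and IFS=: splitting taken from the trace (the real functions.sh does the same thing through shift/eval bookkeeping and extra validation):

  declare -A nvme0
  # split each 'field : value' line of id-ctrl on ':' and key the array by field
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                 # e.g. vid, sn, mdts
    val=${val#"${val%%[![:space:]]*}"}       # trim leading whitespace only
    [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "${nvme0[vid]} ${nvme0[mn]}"          # -> 0x1b36 QEMU NVMe Ctrl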
00:31:20.412 16:46:57 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:20.671 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:20.671 Waiting for block devices as requested 00:31:20.933 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:20.933 16:46:57 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:31:20.933 16:46:57 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:31:20.933 16:46:57 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:31:20.933 16:46:57 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:31:20.933 16:46:57 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:31:20.933 16:46:57 -- scripts/common.sh@15 -- # local i 00:31:20.933 16:46:57 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:31:20.933 16:46:57 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:20.933 16:46:57 -- scripts/common.sh@24 -- # return 0 00:31:20.933 16:46:57 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:31:20.933 16:46:57 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:31:20.933 16:46:57 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@18 -- # shift 00:31:20.933 16:46:57 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 
00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.933 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:31:20.933 16:46:57 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.933 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:20.934 16:46:57 -- 
nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- 
# read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.934 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.934 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.934 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:31:20.935 
16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:31:20.935 
16:46:57 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 
16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:31:20.935 16:46:57 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.935 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:31:20.935 16:46:57 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:31:20.935 16:46:57 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:31:20.935 16:46:57 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:31:20.935 16:46:57 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:31:20.935 16:46:57 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:31:20.935 16:46:57 -- nvme/functions.sh@18 -- # shift 00:31:20.936 16:46:57 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 
00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 
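[annotation] Worked example for the namespace geometry being read here: FLBAS bits 3:0 select the active entry in the LBA-format table, so flbas=0x4 points at lbaf4 (reported further down as "ms:0 lbads:12 rp:0 (in use)"), lbads=12 gives 2^12 = 4096-byte blocks, and nsze=0x140000 blocks works out to the "size: 5GB" the simple_copy test prints later. As shell arithmetic, with the constants taken from this trace:

    flbas=0x4; nsze=0x140000; lbads=12
    fmt=$(( flbas & 0xf ))        # -> 4, the "(in use)" lbaf4 entry
    block=$(( 1 << lbads ))       # -> 4096-byte logical blocks
    bytes=$(( nsze * block ))     # -> 5368709120 bytes, i.e. 5 GiB
    printf 'lbaf%d: %d B blocks, %d bytes\n' "$fmt" "$block" "$bytes"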
00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 
16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.936 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:31:20.936 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.936 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.937 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.937 16:46:57 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.937 16:46:57 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.937 16:46:57 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.937 16:46:57 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.937 16:46:57 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.937 16:46:57 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.937 16:46:57 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.937 16:46:57 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.937 16:46:57 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:31:20.937 16:46:57 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # IFS=: 00:31:20.937 16:46:57 -- nvme/functions.sh@21 -- # read -r reg val 00:31:20.937 16:46:57 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:31:20.937 16:46:57 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:31:20.937 16:46:57 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:31:20.937 16:46:57 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:31:20.937 16:46:57 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:31:20.937 16:46:57 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:31:20.937 16:46:57 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:31:20.937 16:46:57 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:31:20.937 16:46:57 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:31:20.937 16:46:57 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:31:20.937 16:46:57 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:31:20.937 16:46:57 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:31:20.937 16:46:57 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:31:20.937 16:46:57 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:31:20.937 16:46:57 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:31:20.937 16:46:57 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:31:20.937 16:46:57 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:31:20.937 16:46:57 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:31:20.937 16:46:57 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:31:20.937 16:46:57 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:31:20.937 16:46:57 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:31:20.937 16:46:57 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:31:20.937 16:46:57 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:31:20.937 16:46:57 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:31:20.937 16:46:57 -- nvme/functions.sh@76 -- # echo 0x15d 00:31:20.937 16:46:57 -- nvme/functions.sh@184 -- # oncs=0x15d 00:31:20.937 16:46:57 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:31:20.937 16:46:57 -- nvme/functions.sh@197 -- # echo nvme0 00:31:20.937 16:46:57 -- nvme/functions.sh@205 
-- # (( 1 > 0 )) 00:31:20.937 16:46:57 -- nvme/functions.sh@206 -- # echo nvme0 00:31:20.937 16:46:57 -- nvme/functions.sh@207 -- # return 0 00:31:20.937 16:46:57 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:31:20.937 16:46:57 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:31:20.937 16:46:57 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:21.197 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:31:21.456 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:31:22.393 16:46:59 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:22.393 16:46:59 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:31:22.393 16:46:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:22.393 16:46:59 -- common/autotest_common.sh@10 -- # set +x 00:31:22.393 ************************************ 00:31:22.393 START TEST nvme_simple_copy 00:31:22.393 ************************************ 00:31:22.393 16:46:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:22.653 Initializing NVMe Controllers 00:31:22.653 Attaching to 0000:00:06.0 00:31:22.653 Controller supports SCC. Attached to 0000:00:06.0 00:31:22.653 Namespace ID: 1 size: 5GB 00:31:22.653 Initialization complete. 00:31:22.653 00:31:22.653 Controller QEMU NVMe Ctrl (12340 ) 00:31:22.653 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:31:22.653 Namespace Block Size:4096 00:31:22.653 Writing LBAs 0 to 63 with Random Data 00:31:22.653 Copied LBAs from 0 - 63 to the Destination LBA 256 00:31:22.653 LBAs matching Written Data: 64 00:31:22.653 00:31:22.653 real 0m0.287s 00:31:22.653 user 0m0.100s 00:31:22.653 sys 0m0.089s 00:31:22.653 16:46:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:22.653 16:46:59 -- common/autotest_common.sh@10 -- # set +x 00:31:22.653 ************************************ 00:31:22.653 END TEST nvme_simple_copy 00:31:22.654 ************************************ 00:31:22.654 00:31:22.654 real 0m2.390s 00:31:22.654 user 0m0.688s 00:31:22.654 sys 0m1.577s 00:31:22.654 16:46:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:22.654 16:46:59 -- common/autotest_common.sh@10 -- # set +x 00:31:22.654 ************************************ 00:31:22.654 END TEST nvme_scc 00:31:22.654 ************************************ 00:31:22.913 16:46:59 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:31:22.913 16:46:59 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:31:22.913 16:46:59 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:31:22.913 16:46:59 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:31:22.913 16:46:59 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:31:22.913 16:46:59 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:22.913 16:46:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:22.913 16:46:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:22.913 16:46:59 -- common/autotest_common.sh@10 -- # set +x 00:31:22.913 ************************************ 00:31:22.913 START TEST nvme_rpc 00:31:22.913 ************************************ 00:31:22.913 16:46:59 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:22.913 * Looking for test storage... 
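[annotation] The get_ctrl_with_feature/ctrl_has_scc chain above boils down to a single bitmask test: ONCS (Optional NVM Command Support) bit 8 advertises Simple Copy, and with oncs=0x15d that bit is set, which is why the controller is selected and the test prints "Controller supports SCC." Reduced to its essence:

    oncs=0x15d                    # parsed from the id-ctrl data above
    if (( oncs & (1 << 8) )); then
        echo 'controller supports Simple Copy (SCC)'
    fi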
00:31:22.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:22.913 16:46:59 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:22.913 16:46:59 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:31:22.913 16:46:59 -- common/autotest_common.sh@1509 -- # bdfs=() 00:31:22.913 16:46:59 -- common/autotest_common.sh@1509 -- # local bdfs 00:31:22.913 16:46:59 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:31:22.913 16:46:59 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:31:22.913 16:46:59 -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:22.913 16:46:59 -- common/autotest_common.sh@1498 -- # local bdfs 00:31:22.913 16:46:59 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:22.913 16:46:59 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:22.913 16:46:59 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:22.913 16:46:59 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:22.913 16:46:59 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 00:31:22.913 16:46:59 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:31:22.913 16:46:59 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:31:22.913 16:46:59 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=144843 00:31:22.913 16:46:59 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:22.913 16:46:59 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:31:22.913 16:46:59 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 144843 00:31:22.913 16:46:59 -- common/autotest_common.sh@819 -- # '[' -z 144843 ']' 00:31:22.913 16:46:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.913 16:46:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:22.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.913 16:46:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.913 16:46:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:22.913 16:46:59 -- common/autotest_common.sh@10 -- # set +x 00:31:22.913 [2024-07-11 16:46:59.710087] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
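[annotation] get_first_nvme_bdf above is essentially this pipeline: gen_nvme.sh emits a bdev config covering the local NVMe controllers, jq pulls out each traddr (the PCI address), and the first entry wins. Condensed, using the same scripts minus the autotest plumbing:

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    bdf=${bdfs[0]}                # -> 0000:00:06.0 on this VM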
00:31:22.913 [2024-07-11 16:46:59.710911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144843 ] 00:31:23.173 [2024-07-11 16:46:59.879829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:23.431 [2024-07-11 16:47:00.063255] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:23.431 [2024-07-11 16:47:00.063764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.431 [2024-07-11 16:47:00.063770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.809 16:47:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:24.809 16:47:01 -- common/autotest_common.sh@852 -- # return 0 00:31:24.809 16:47:01 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:31:25.068 Nvme0n1 00:31:25.068 16:47:01 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:31:25.068 16:47:01 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:31:25.068 request: 00:31:25.068 { 00:31:25.068 "filename": "non_existing_file", 00:31:25.068 "bdev_name": "Nvme0n1", 00:31:25.068 "method": "bdev_nvme_apply_firmware", 00:31:25.068 "req_id": 1 00:31:25.068 } 00:31:25.068 Got JSON-RPC error response 00:31:25.068 response: 00:31:25.068 { 00:31:25.068 "code": -32603, 00:31:25.068 "message": "open file failed." 00:31:25.068 } 00:31:25.327 16:47:01 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:31:25.327 16:47:01 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:31:25.327 16:47:01 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:25.327 16:47:02 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:31:25.327 16:47:02 -- nvme/nvme_rpc.sh@40 -- # killprocess 144843 00:31:25.327 16:47:02 -- common/autotest_common.sh@926 -- # '[' -z 144843 ']' 00:31:25.327 16:47:02 -- common/autotest_common.sh@930 -- # kill -0 144843 00:31:25.327 16:47:02 -- common/autotest_common.sh@931 -- # uname 00:31:25.327 16:47:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:25.327 16:47:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 144843 00:31:25.327 16:47:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:25.327 16:47:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:25.327 16:47:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 144843' 00:31:25.327 killing process with pid 144843 00:31:25.327 16:47:02 -- common/autotest_common.sh@945 -- # kill 144843 00:31:25.327 16:47:02 -- common/autotest_common.sh@950 -- # wait 144843 00:31:27.229 00:31:27.229 real 0m4.254s 00:31:27.229 user 0m8.273s 00:31:27.229 sys 0m0.627s 00:31:27.229 16:47:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:27.229 ************************************ 00:31:27.229 END TEST nvme_rpc 00:31:27.229 ************************************ 00:31:27.229 16:47:03 -- common/autotest_common.sh@10 -- # set +x 00:31:27.229 16:47:03 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:27.229 16:47:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:27.229 16:47:03 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:31:27.229 16:47:03 -- common/autotest_common.sh@10 -- # set +x 00:31:27.229 ************************************ 00:31:27.229 START TEST nvme_rpc_timeouts 00:31:27.229 ************************************ 00:31:27.229 16:47:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:27.229 * Looking for test storage... 00:31:27.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:27.229 16:47:03 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:27.229 16:47:03 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_144926 00:31:27.229 16:47:03 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_144926 00:31:27.229 16:47:03 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=144950 00:31:27.229 16:47:03 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:27.229 16:47:03 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:31:27.229 16:47:03 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 144950 00:31:27.229 16:47:03 -- common/autotest_common.sh@819 -- # '[' -z 144950 ']' 00:31:27.229 16:47:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.229 16:47:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:27.229 16:47:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:27.229 16:47:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:27.229 16:47:03 -- common/autotest_common.sh@10 -- # set +x 00:31:27.229 [2024-07-11 16:47:03.932588] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:27.229 [2024-07-11 16:47:03.932769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144950 ] 00:31:27.487 [2024-07-11 16:47:04.097572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:27.487 [2024-07-11 16:47:04.268775] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:27.487 [2024-07-11 16:47:04.269336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.487 [2024-07-11 16:47:04.269330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:28.862 Checking default timeout settings: 00:31:28.862 16:47:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:28.862 16:47:05 -- common/autotest_common.sh@852 -- # return 0 00:31:28.862 16:47:05 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:31:28.862 16:47:05 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:29.120 Making settings changes with rpc: 00:31:29.120 16:47:05 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:31:29.120 16:47:05 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:31:29.377 Check default vs. 
modified settings: 00:31:29.377 16:47:06 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:31:29.377 16:47:06 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_144926 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_144926 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:29.635 Setting action_on_timeout is changed as expected. 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_144926 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:29.635 16:47:06 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_144926 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:29.636 Setting timeout_us is changed as expected. 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_144926 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_144926 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:29.636 Setting timeout_admin_us is changed as expected. 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
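[annotation] Each settings_to_check iteration above is the same three-stage extraction run against both saved configs; a compact equivalent of that loop, with the tmpfile names as used in this run:

    extract() {                   # extract <setting> <saved-config>
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(extract "$setting" /tmp/settings_default_144926)
        after=$(extract "$setting" /tmp/settings_modified_144926)
        echo "$setting: ${before:-unset} -> $after"
    done

which should report none -> abort, 0 -> 12000000 and 0 -> 24000000, matching the three "changed as expected" lines above.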
00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_144926 /tmp/settings_modified_144926 00:31:29.636 16:47:06 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 144950 00:31:29.636 16:47:06 -- common/autotest_common.sh@926 -- # '[' -z 144950 ']' 00:31:29.636 16:47:06 -- common/autotest_common.sh@930 -- # kill -0 144950 00:31:29.636 16:47:06 -- common/autotest_common.sh@931 -- # uname 00:31:29.636 16:47:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:29.636 16:47:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 144950 00:31:29.636 killing process with pid 144950 00:31:29.636 16:47:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:29.636 16:47:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:29.636 16:47:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 144950' 00:31:29.636 16:47:06 -- common/autotest_common.sh@945 -- # kill 144950 00:31:29.636 16:47:06 -- common/autotest_common.sh@950 -- # wait 144950 00:31:31.535 RPC TIMEOUT SETTING TEST PASSED. 00:31:31.535 16:47:08 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:31:31.535 00:31:31.535 real 0m4.334s 00:31:31.535 user 0m8.458s 00:31:31.535 sys 0m0.599s 00:31:31.535 ************************************ 00:31:31.535 END TEST nvme_rpc_timeouts 00:31:31.535 ************************************ 00:31:31.536 16:47:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:31.536 16:47:08 -- common/autotest_common.sh@10 -- # set +x 00:31:31.536 16:47:08 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:31:31.536 16:47:08 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@268 -- # timing_exit lib 00:31:31.536 16:47:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:31.536 16:47:08 -- common/autotest_common.sh@10 -- # set +x 00:31:31.536 16:47:08 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:31.536 16:47:08 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:31.536 16:47:08 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:31.536 16:47:08 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:31.536 16:47:08 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:31:31.536 16:47:08 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:31.536 16:47:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:31.536 16:47:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 
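[annotation] Every START TEST/END TEST pair in this log comes from the same run_test wrapper; judging only by the banners and the real/user/sys lines it emits, its shape is roughly the following (a sketch, not the autotest_common.sh source):

    run_test() {                  # run_test <name> <command...>
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                 # the real/user/sys lines in the log
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }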
00:31:31.536 16:47:08 -- common/autotest_common.sh@10 -- # set +x 00:31:31.536 ************************************ 00:31:31.536 START TEST blockdev_raid5f 00:31:31.536 ************************************ 00:31:31.536 16:47:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:31.536 * Looking for test storage... 00:31:31.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:31.536 16:47:08 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:31.536 16:47:08 -- bdev/nbd_common.sh@6 -- # set -e 00:31:31.536 16:47:08 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:31.536 16:47:08 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:31.536 16:47:08 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:31:31.536 16:47:08 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:31:31.536 16:47:08 -- bdev/blockdev.sh@18 -- # : 00:31:31.536 16:47:08 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:31:31.536 16:47:08 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:31:31.536 16:47:08 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:31:31.536 16:47:08 -- bdev/blockdev.sh@672 -- # uname -s 00:31:31.536 16:47:08 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:31:31.536 16:47:08 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:31:31.536 16:47:08 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:31:31.536 16:47:08 -- bdev/blockdev.sh@681 -- # crypto_device= 00:31:31.536 16:47:08 -- bdev/blockdev.sh@682 -- # dek= 00:31:31.536 16:47:08 -- bdev/blockdev.sh@683 -- # env_ctx= 00:31:31.536 16:47:08 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:31:31.536 16:47:08 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:31:31.536 16:47:08 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:31:31.536 16:47:08 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:31:31.536 16:47:08 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:31:31.536 16:47:08 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=145131 00:31:31.536 16:47:08 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:31.536 16:47:08 -- bdev/blockdev.sh@47 -- # waitforlisten 145131 00:31:31.536 16:47:08 -- common/autotest_common.sh@819 -- # '[' -z 145131 ']' 00:31:31.536 16:47:08 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:31.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.536 16:47:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.536 16:47:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:31.536 16:47:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.536 16:47:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:31.536 16:47:08 -- common/autotest_common.sh@10 -- # set +x 00:31:31.794 [2024-07-11 16:47:08.364518] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:31:31.794 [2024-07-11 16:47:08.364708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145131 ] 00:31:31.794 [2024-07-11 16:47:08.530262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.053 [2024-07-11 16:47:08.695702] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:32.053 [2024-07-11 16:47:08.695962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.428 16:47:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:33.428 16:47:09 -- common/autotest_common.sh@852 -- # return 0 00:31:33.428 16:47:09 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:31:33.428 16:47:09 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:31:33.428 16:47:09 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:31:33.428 16:47:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.428 16:47:09 -- common/autotest_common.sh@10 -- # set +x 00:31:33.428 Malloc0 00:31:33.428 Malloc1 00:31:33.428 Malloc2 00:31:33.428 16:47:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.428 16:47:10 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:31:33.428 16:47:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.428 16:47:10 -- common/autotest_common.sh@10 -- # set +x 00:31:33.428 16:47:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.428 16:47:10 -- bdev/blockdev.sh@738 -- # cat 00:31:33.428 16:47:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:31:33.428 16:47:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.428 16:47:10 -- common/autotest_common.sh@10 -- # set +x 00:31:33.428 16:47:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.428 16:47:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:31:33.428 16:47:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.428 16:47:10 -- common/autotest_common.sh@10 -- # set +x 00:31:33.428 16:47:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.428 16:47:10 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:33.428 16:47:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.428 16:47:10 -- common/autotest_common.sh@10 -- # set +x 00:31:33.428 16:47:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.428 16:47:10 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:31:33.428 16:47:10 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:31:33.428 16:47:10 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:31:33.428 16:47:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:33.428 16:47:10 -- common/autotest_common.sh@10 -- # set +x 00:31:33.428 16:47:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:33.428 16:47:10 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:31:33.428 16:47:10 -- bdev/blockdev.sh@747 -- # jq -r .name 00:31:33.429 16:47:10 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fec0e467-7a67-476b-8e72-aa302cca1675"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fec0e467-7a67-476b-8e72-aa302cca1675",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fec0e467-7a67-476b-8e72-aa302cca1675",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "dcd522c3-14e6-40dd-9391-6a208b180ee4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ab9d9492-e855-4e3d-ae16-550ac3b964ec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c2a02a48-ad96-4627-a722-cd9ff750f7e3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:31:33.429 16:47:10 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:31:33.429 16:47:10 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:31:33.429 16:47:10 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:31:33.429 16:47:10 -- bdev/blockdev.sh@752 -- # killprocess 145131 00:31:33.429 16:47:10 -- common/autotest_common.sh@926 -- # '[' -z 145131 ']' 00:31:33.429 16:47:10 -- common/autotest_common.sh@930 -- # kill -0 145131 00:31:33.429 16:47:10 -- common/autotest_common.sh@931 -- # uname 00:31:33.429 16:47:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:33.429 16:47:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145131 00:31:33.429 16:47:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:33.429 16:47:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:33.429 killing process with pid 145131 00:31:33.429 16:47:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145131' 00:31:33.429 16:47:10 -- common/autotest_common.sh@945 -- # kill 145131 00:31:33.429 16:47:10 -- common/autotest_common.sh@950 -- # wait 145131 00:31:35.959 16:47:12 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:35.959 16:47:12 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:35.959 16:47:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:31:35.959 16:47:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:35.959 16:47:12 -- common/autotest_common.sh@10 -- # set +x 00:31:35.959 ************************************ 00:31:35.959 START TEST bdev_hello_world 00:31:35.959 ************************************ 00:31:35.959 16:47:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:35.959 [2024-07-11 16:47:12.253686] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:31:35.959 [2024-07-11 16:47:12.253878] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145205 ] 00:31:35.959 [2024-07-11 16:47:12.421608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.959 [2024-07-11 16:47:12.579959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:36.216 [2024-07-11 16:47:13.022518] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:36.216 [2024-07-11 16:47:13.022612] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:31:36.216 [2024-07-11 16:47:13.022658] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:36.216 [2024-07-11 16:47:13.023228] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:36.216 [2024-07-11 16:47:13.023424] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:36.216 [2024-07-11 16:47:13.023460] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:36.216 [2024-07-11 16:47:13.023568] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:31:36.216 00:31:36.216 [2024-07-11 16:47:13.023611] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:37.592 00:31:37.592 real 0m1.914s 00:31:37.592 user 0m1.554s 00:31:37.592 sys 0m0.240s 00:31:37.592 16:47:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:37.592 ************************************ 00:31:37.592 END TEST bdev_hello_world 00:31:37.592 ************************************ 00:31:37.592 16:47:14 -- common/autotest_common.sh@10 -- # set +x 00:31:37.592 16:47:14 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:31:37.592 16:47:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:37.592 16:47:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:37.592 16:47:14 -- common/autotest_common.sh@10 -- # set +x 00:31:37.592 ************************************ 00:31:37.592 START TEST bdev_bounds 00:31:37.592 ************************************ 00:31:37.592 16:47:14 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:31:37.592 16:47:14 -- bdev/blockdev.sh@288 -- # bdevio_pid=145251 00:31:37.592 16:47:14 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:37.592 Process bdevio pid: 145251 00:31:37.592 16:47:14 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 145251' 00:31:37.592 16:47:14 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:37.592 16:47:14 -- bdev/blockdev.sh@291 -- # waitforlisten 145251 00:31:37.592 16:47:14 -- common/autotest_common.sh@819 -- # '[' -z 145251 ']' 00:31:37.592 16:47:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:37.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:37.592 16:47:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:37.592 16:47:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
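[annotation] bdev_bounds drives bdevio in two processes: the bdevio binary is launched against the same bdev.json (with the -w -s 0 flags as invoked below) and left waiting on /var/tmp/spdk.sock, then tests.py perform_tests triggers the CUnit suite over that socket. The skeleton, with waitforlisten reduced to a placeholder comment:

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/test/bdev/bdevio/bdevio" -w -s 0 --json "$spdk/test/bdev/bdev.json" &
    bdevio_pid=$!
    # ... poll until /var/tmp/spdk.sock accepts connections (waitforlisten) ...
    "$spdk/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"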
00:31:37.592 16:47:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:37.592 16:47:14 -- common/autotest_common.sh@10 -- # set +x 00:31:37.592 [2024-07-11 16:47:14.230278] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:37.592 [2024-07-11 16:47:14.231087] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145251 ] 00:31:37.850 [2024-07-11 16:47:14.405317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:37.851 [2024-07-11 16:47:14.584837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:37.851 [2024-07-11 16:47:14.585006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:37.851 [2024-07-11 16:47:14.584999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:38.418 16:47:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:38.418 16:47:15 -- common/autotest_common.sh@852 -- # return 0 00:31:38.418 16:47:15 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:38.676 I/O targets: 00:31:38.676 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:31:38.676 00:31:38.676 00:31:38.676 CUnit - A unit testing framework for C - Version 2.1-3 00:31:38.676 http://cunit.sourceforge.net/ 00:31:38.676 00:31:38.676 00:31:38.676 Suite: bdevio tests on: raid5f 00:31:38.676 Test: blockdev write read block ...passed 00:31:38.676 Test: blockdev write zeroes read block ...passed 00:31:38.676 Test: blockdev write zeroes read no split ...passed 00:31:38.676 Test: blockdev write zeroes read split ...passed 00:31:38.676 Test: blockdev write zeroes read split partial ...passed 00:31:38.676 Test: blockdev reset ...passed 00:31:38.676 Test: blockdev write read 8 blocks ...passed 00:31:38.676 Test: blockdev write read size > 128k ...passed 00:31:38.676 Test: blockdev write read invalid size ...passed 00:31:38.676 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:38.676 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:38.676 Test: blockdev write read max offset ...passed 00:31:38.677 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:38.677 Test: blockdev writev readv 8 blocks ...passed 00:31:38.677 Test: blockdev writev readv 30 x 1block ...passed 00:31:38.677 Test: blockdev writev readv block ...passed 00:31:38.677 Test: blockdev writev readv size > 128k ...passed 00:31:38.677 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:38.677 Test: blockdev comparev and writev ...passed 00:31:38.677 Test: blockdev nvme passthru rw ...passed 00:31:38.677 Test: blockdev nvme passthru vendor specific ...passed 00:31:38.677 Test: blockdev nvme admin passthru ...passed 00:31:38.677 Test: blockdev copy ...passed 00:31:38.677 00:31:38.677 Run Summary: Type Total Ran Passed Failed Inactive 00:31:38.677 suites 1 1 n/a 0 0 00:31:38.677 tests 23 23 23 0 0 00:31:38.677 asserts 130 130 130 0 n/a 00:31:38.677 00:31:38.677 Elapsed time = 0.419 seconds 00:31:38.677 0 00:31:38.677 16:47:15 -- bdev/blockdev.sh@293 -- # killprocess 145251 00:31:38.677 16:47:15 -- common/autotest_common.sh@926 -- # '[' -z 145251 ']' 00:31:38.677 16:47:15 -- common/autotest_common.sh@930 -- # kill -0 145251 00:31:38.677 16:47:15 -- common/autotest_common.sh@931 -- # uname 00:31:38.677 16:47:15 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:38.677 16:47:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145251 00:31:38.677 16:47:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:38.677 16:47:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:38.677 16:47:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145251' 00:31:38.677 killing process with pid 145251 00:31:38.677 16:47:15 -- common/autotest_common.sh@945 -- # kill 145251 00:31:38.677 16:47:15 -- common/autotest_common.sh@950 -- # wait 145251 00:31:40.054 16:47:16 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:31:40.054 00:31:40.054 real 0m2.492s 00:31:40.054 user 0m5.987s 00:31:40.054 sys 0m0.296s 00:31:40.054 16:47:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:40.054 ************************************ 00:31:40.054 END TEST bdev_bounds 00:31:40.054 ************************************ 00:31:40.054 16:47:16 -- common/autotest_common.sh@10 -- # set +x 00:31:40.054 16:47:16 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:40.054 16:47:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:31:40.054 16:47:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:40.054 16:47:16 -- common/autotest_common.sh@10 -- # set +x 00:31:40.054 ************************************ 00:31:40.054 START TEST bdev_nbd 00:31:40.054 ************************************ 00:31:40.054 16:47:16 -- common/autotest_common.sh@1104 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:40.054 16:47:16 -- bdev/blockdev.sh@298 -- # uname -s 00:31:40.054 16:47:16 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:31:40.054 16:47:16 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:40.054 16:47:16 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:40.054 16:47:16 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:31:40.054 16:47:16 -- bdev/blockdev.sh@302 -- # local bdev_all 00:31:40.054 16:47:16 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:31:40.054 16:47:16 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:31:40.054 16:47:16 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:31:40.054 16:47:16 -- bdev/blockdev.sh@309 -- # local nbd_all 00:31:40.054 16:47:16 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:31:40.054 16:47:16 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:31:40.054 16:47:16 -- bdev/blockdev.sh@312 -- # local nbd_list 00:31:40.054 16:47:16 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:31:40.054 16:47:16 -- bdev/blockdev.sh@313 -- # local bdev_list 00:31:40.054 16:47:16 -- bdev/blockdev.sh@316 -- # nbd_pid=145339 00:31:40.054 16:47:16 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:40.054 16:47:16 -- bdev/blockdev.sh@318 -- # waitforlisten 145339 /var/tmp/spdk-nbd.sock 00:31:40.054 16:47:16 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:40.054 16:47:16 -- common/autotest_common.sh@819 -- # '[' -z 145339 ']' 00:31:40.054 16:47:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:40.054 16:47:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:40.054 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:40.054 16:47:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:40.054 16:47:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:40.054 16:47:16 -- common/autotest_common.sh@10 -- # set +x 00:31:40.054 [2024-07-11 16:47:16.778353] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:40.054 [2024-07-11 16:47:16.778543] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.313 [2024-07-11 16:47:16.946847] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.571 [2024-07-11 16:47:17.138329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.201 16:47:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:41.201 16:47:17 -- common/autotest_common.sh@852 -- # return 0 00:31:41.201 16:47:17 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@24 -- # local i 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:41.201 16:47:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:41.201 16:47:17 -- common/autotest_common.sh@857 -- # local i 00:31:41.201 16:47:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:41.201 16:47:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:41.201 16:47:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:41.201 16:47:17 -- common/autotest_common.sh@861 -- # break 00:31:41.201 16:47:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:41.201 16:47:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:41.201 16:47:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:41.201 1+0 records in 00:31:41.201 1+0 records out 00:31:41.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359029 s, 11.4 MB/s 00:31:41.201 16:47:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:41.201 16:47:17 -- common/autotest_common.sh@874 -- # size=4096 00:31:41.201 16:47:17 -- common/autotest_common.sh@875 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:41.201 16:47:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:41.201 16:47:17 -- common/autotest_common.sh@877 -- # return 0 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:41.201 16:47:17 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:41.461 16:47:18 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:41.461 { 00:31:41.461 "nbd_device": "/dev/nbd0", 00:31:41.461 "bdev_name": "raid5f" 00:31:41.461 } 00:31:41.461 ]' 00:31:41.461 16:47:18 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:41.461 16:47:18 -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:41.461 { 00:31:41.461 "nbd_device": "/dev/nbd0", 00:31:41.461 "bdev_name": "raid5f" 00:31:41.461 } 00:31:41.461 ]' 00:31:41.461 16:47:18 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:41.461 16:47:18 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:41.461 16:47:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:41.461 16:47:18 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:41.461 16:47:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:41.461 16:47:18 -- bdev/nbd_common.sh@51 -- # local i 00:31:41.461 16:47:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:41.461 16:47:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@41 -- # break 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@45 -- # return 0 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:41.717 16:47:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@65 -- # true 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@65 -- # count=0 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@122 -- # count=0 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:41.975 16:47:18 -- 
bdev/nbd_common.sh@127 -- # return 0 00:31:41.975 16:47:18 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@12 -- # local i 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:41.975 16:47:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:31:42.232 /dev/nbd0 00:31:42.232 16:47:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:42.232 16:47:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:42.232 16:47:18 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:31:42.232 16:47:18 -- common/autotest_common.sh@857 -- # local i 00:31:42.232 16:47:18 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:31:42.232 16:47:18 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:31:42.232 16:47:18 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:31:42.232 16:47:18 -- common/autotest_common.sh@861 -- # break 00:31:42.232 16:47:18 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:31:42.232 16:47:18 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:31:42.232 16:47:18 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:42.232 1+0 records in 00:31:42.232 1+0 records out 00:31:42.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0014656 s, 2.8 MB/s 00:31:42.232 16:47:18 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:42.232 16:47:18 -- common/autotest_common.sh@874 -- # size=4096 00:31:42.232 16:47:18 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:42.232 16:47:18 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:31:42.232 16:47:18 -- common/autotest_common.sh@877 -- # return 0 00:31:42.232 16:47:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:42.232 16:47:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:42.232 16:47:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:42.232 16:47:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:42.232 16:47:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:42.489 { 00:31:42.489 "nbd_device": "/dev/nbd0", 00:31:42.489 "bdev_name": "raid5f" 00:31:42.489 } 00:31:42.489 ]' 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:42.489 { 00:31:42.489 "nbd_device": 
"/dev/nbd0", 00:31:42.489 "bdev_name": "raid5f" 00:31:42.489 } 00:31:42.489 ]' 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@65 -- # count=1 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@66 -- # echo 1 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@95 -- # count=1 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:42.489 256+0 records in 00:31:42.489 256+0 records out 00:31:42.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103493 s, 101 MB/s 00:31:42.489 16:47:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:42.490 256+0 records in 00:31:42.490 256+0 records out 00:31:42.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267345 s, 39.2 MB/s 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@51 -- # local i 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:42.490 16:47:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:42.747 16:47:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:42.747 16:47:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:42.747 16:47:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:42.747 16:47:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:42.747 16:47:19 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:42.747 16:47:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:42.747 16:47:19 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:31:43.004 16:47:19 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:31:43.004 16:47:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:43.004 16:47:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:43.004 16:47:19 -- bdev/nbd_common.sh@41 -- # break 00:31:43.004 16:47:19 -- bdev/nbd_common.sh@45 -- # return 0 00:31:43.004 16:47:19 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:43.004 16:47:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:43.004 16:47:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@65 -- # true 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@65 -- # count=0 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@104 -- # count=0 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@109 -- # return 0 00:31:43.262 16:47:19 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:43.262 16:47:19 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:43.520 malloc_lvol_verify 00:31:43.520 16:47:20 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:43.778 d5e3700b-cacc-423d-b428-768ca97316a4 00:31:43.778 16:47:20 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:43.778 715fdca2-9d3c-4540-8a54-2d2abbc90b0f 00:31:43.778 16:47:20 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:44.036 /dev/nbd0 00:31:44.036 16:47:20 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:44.294 mke2fs 1.45.5 (07-Jan-2020) 00:31:44.294 00:31:44.294 Filesystem too small for a journal 00:31:44.294 Creating filesystem with 1024 4k blocks and 1024 inodes 00:31:44.294 00:31:44.294 Allocating group tables: 0/1 done 00:31:44.294 Writing inode tables: 0/1 done 00:31:44.294 Writing superblocks and filesystem accounting information: 0/1 done 00:31:44.294 00:31:44.294 16:47:20 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:44.294 16:47:20 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:44.294 16:47:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:44.294 16:47:20 -- 
bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:44.294 16:47:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:44.294 16:47:20 -- bdev/nbd_common.sh@51 -- # local i 00:31:44.294 16:47:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:44.294 16:47:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:44.294 16:47:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:44.294 16:47:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:44.294 16:47:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:44.294 16:47:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:44.294 16:47:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:44.294 16:47:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:44.294 16:47:21 -- bdev/nbd_common.sh@41 -- # break 00:31:44.294 16:47:21 -- bdev/nbd_common.sh@45 -- # return 0 00:31:44.294 16:47:21 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:44.294 16:47:21 -- bdev/nbd_common.sh@147 -- # return 0 00:31:44.294 16:47:21 -- bdev/blockdev.sh@324 -- # killprocess 145339 00:31:44.294 16:47:21 -- common/autotest_common.sh@926 -- # '[' -z 145339 ']' 00:31:44.294 16:47:21 -- common/autotest_common.sh@930 -- # kill -0 145339 00:31:44.294 16:47:21 -- common/autotest_common.sh@931 -- # uname 00:31:44.294 16:47:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:44.294 16:47:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 145339 00:31:44.294 16:47:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:44.294 16:47:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:44.294 16:47:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 145339' 00:31:44.294 killing process with pid 145339 00:31:44.294 16:47:21 -- common/autotest_common.sh@945 -- # kill 145339 00:31:44.294 16:47:21 -- common/autotest_common.sh@950 -- # wait 145339 00:31:45.667 ************************************ 00:31:45.667 END TEST bdev_nbd 00:31:45.667 ************************************ 00:31:45.667 16:47:22 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:31:45.667 00:31:45.667 real 0m5.539s 00:31:45.667 user 0m7.809s 00:31:45.667 sys 0m1.038s 00:31:45.667 16:47:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:45.667 16:47:22 -- common/autotest_common.sh@10 -- # set +x 00:31:45.667 16:47:22 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:31:45.667 16:47:22 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:31:45.667 16:47:22 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:31:45.667 16:47:22 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:31:45.667 16:47:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:31:45.667 16:47:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:45.667 16:47:22 -- common/autotest_common.sh@10 -- # set +x 00:31:45.667 ************************************ 00:31:45.667 START TEST bdev_fio 00:31:45.667 ************************************ 00:31:45.667 16:47:22 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:31:45.667 16:47:22 -- bdev/blockdev.sh@329 -- # local env_context 00:31:45.667 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:31:45.667 16:47:22 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:31:45.667 16:47:22 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:31:45.667 16:47:22 -- 
bdev/blockdev.sh@337 -- # echo '' 00:31:45.667 16:47:22 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:31:45.667 16:47:22 -- bdev/blockdev.sh@337 -- # env_context= 00:31:45.667 16:47:22 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:31:45.667 16:47:22 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:45.667 16:47:22 -- common/autotest_common.sh@1260 -- # local workload=verify 00:31:45.667 16:47:22 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:31:45.667 16:47:22 -- common/autotest_common.sh@1262 -- # local env_context= 00:31:45.667 16:47:22 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:31:45.667 16:47:22 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:31:45.667 16:47:22 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:31:45.667 16:47:22 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:31:45.667 16:47:22 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:45.667 16:47:22 -- common/autotest_common.sh@1280 -- # cat 00:31:45.667 16:47:22 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:31:45.667 16:47:22 -- common/autotest_common.sh@1293 -- # cat 00:31:45.667 16:47:22 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:31:45.667 16:47:22 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:31:45.667 16:47:22 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:31:45.667 16:47:22 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:31:45.667 16:47:22 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:31:45.667 16:47:22 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:31:45.667 16:47:22 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:31:45.667 16:47:22 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:31:45.668 16:47:22 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:45.668 16:47:22 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:31:45.668 16:47:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:45.668 16:47:22 -- common/autotest_common.sh@10 -- # set +x 00:31:45.668 ************************************ 00:31:45.668 START TEST bdev_fio_rw_verify 00:31:45.668 ************************************ 00:31:45.668 16:47:22 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:45.668 16:47:22 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:45.668 16:47:22 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:45.668 16:47:22 -- common/autotest_common.sh@1318 -- # sanitizers=(libasan libclang_rt.asan) 00:31:45.668 16:47:22 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:45.668 16:47:22 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:45.668 16:47:22 -- common/autotest_common.sh@1320 -- # shift 00:31:45.668 16:47:22 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:45.668 16:47:22 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.668 16:47:22 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:45.668 16:47:22 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:45.668 16:47:22 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:45.668 16:47:22 -- common/autotest_common.sh@1324 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:31:45.668 16:47:22 -- common/autotest_common.sh@1325 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:31:45.668 16:47:22 -- common/autotest_common.sh@1326 -- # break 00:31:45.668 16:47:22 -- common/autotest_common.sh@1331 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:45.668 16:47:22 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:45.926 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:45.926 fio-3.35 00:31:45.926 Starting 1 thread 00:31:58.121 00:31:58.122 job_raid5f: (groupid=0, jobs=1): err= 0: pid=145576: Thu Jul 11 16:47:33 2024 00:31:58.122 read: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(456MiB/10001msec) 00:31:58.122 slat (usec): min=18, max=107, avg=20.81, stdev= 3.14 00:31:58.122 clat (usec): min=10, max=436, avg=135.65, stdev=50.56 00:31:58.122 lat (usec): min=31, max=475, avg=156.45, stdev=51.38 00:31:58.122 clat percentiles (usec): 00:31:58.122 | 50.000th=[ 137], 99.000th=[ 249], 99.900th=[ 318], 99.990th=[ 363], 00:31:58.122 | 99.999th=[ 408] 00:31:58.122 write: IOPS=12.2k, BW=47.8MiB/s (50.2MB/s)(472MiB/9871msec); 0 zone resets 00:31:58.122 slat (usec): min=8, max=1104, avg=17.98, stdev= 6.11 00:31:58.122 clat (usec): min=56, max=1442, avg=309.03, stdev=47.70 00:31:58.122 lat (usec): min=73, max=1576, avg=327.01, stdev=49.50 00:31:58.122 clat percentiles (usec): 00:31:58.122 | 50.000th=[ 310], 99.000th=[ 445], 99.900th=[ 537], 99.990th=[ 914], 00:31:58.122 | 99.999th=[ 1418] 00:31:58.122 bw ( KiB/s): min=42328, max=53488, per=98.60%, avg=48308.63, stdev=3040.10, samples=19 00:31:58.122 iops : min=10582, max=13372, avg=12077.16, stdev=760.02, samples=19 00:31:58.122 lat (usec) : 20=0.01%, 50=0.01%, 100=15.12%, 250=39.19%, 500=45.53% 00:31:58.122 lat (usec) : 750=0.14%, 1000=0.01% 00:31:58.122 lat (msec) : 2=0.01% 00:31:58.122 cpu : usr=99.61%, sys=0.34%, ctx=112, majf=0, minf=8298 00:31:58.122 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:58.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.122 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.122 
issued rwts: total=116810,120902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:58.122 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:58.122 00:31:58.122 Run status group 0 (all jobs): 00:31:58.122 READ: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=456MiB (478MB), run=10001-10001msec 00:31:58.122 WRITE: bw=47.8MiB/s (50.2MB/s), 47.8MiB/s-47.8MiB/s (50.2MB/s-50.2MB/s), io=472MiB (495MB), run=9871-9871msec 00:31:58.122 ----------------------------------------------------- 00:31:58.122 Suppressions used: 00:31:58.122 count bytes template 00:31:58.122 1 7 /usr/src/fio/parse.c 00:31:58.122 466 44736 /usr/src/fio/iolog.c 00:31:58.122 2 596 libcrypto.so 00:31:58.122 ----------------------------------------------------- 00:31:58.122 00:31:58.122 00:31:58.122 real 0m12.282s 00:31:58.122 user 0m12.737s 00:31:58.122 sys 0m0.554s 00:31:58.122 16:47:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.122 ************************************ 00:31:58.122 16:47:34 -- common/autotest_common.sh@10 -- # set +x 00:31:58.122 END TEST bdev_fio_rw_verify 00:31:58.122 ************************************ 00:31:58.122 16:47:34 -- bdev/blockdev.sh@348 -- # rm -f 00:31:58.122 16:47:34 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:58.122 16:47:34 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:31:58.122 16:47:34 -- common/autotest_common.sh@1259 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:58.122 16:47:34 -- common/autotest_common.sh@1260 -- # local workload=trim 00:31:58.122 16:47:34 -- common/autotest_common.sh@1261 -- # local bdev_type= 00:31:58.122 16:47:34 -- common/autotest_common.sh@1262 -- # local env_context= 00:31:58.122 16:47:34 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:31:58.122 16:47:34 -- common/autotest_common.sh@1265 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:31:58.122 16:47:34 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:31:58.122 16:47:34 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:31:58.122 16:47:34 -- common/autotest_common.sh@1278 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:58.122 16:47:34 -- common/autotest_common.sh@1280 -- # cat 00:31:58.122 16:47:34 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:31:58.122 16:47:34 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:31:58.122 16:47:34 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:31:58.122 16:47:34 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:58.122 16:47:34 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fec0e467-7a67-476b-8e72-aa302cca1675"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fec0e467-7a67-476b-8e72-aa302cca1675",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fec0e467-7a67-476b-8e72-aa302cca1675",' ' "strip_size_kb": 2,' ' "state": "online",' 
' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "dcd522c3-14e6-40dd-9391-6a208b180ee4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "ab9d9492-e855-4e3d-ae16-550ac3b964ec",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c2a02a48-ad96-4627-a722-cd9ff750f7e3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:31:58.122 16:47:34 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:31:58.122 16:47:34 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:58.122 /home/vagrant/spdk_repo/spdk 00:31:58.122 16:47:34 -- bdev/blockdev.sh@360 -- # popd 00:31:58.122 16:47:34 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:31:58.122 16:47:34 -- bdev/blockdev.sh@362 -- # return 0 00:31:58.122 00:31:58.122 real 0m12.447s 00:31:58.122 user 0m12.841s 00:31:58.122 sys 0m0.612s 00:31:58.122 16:47:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:58.122 ************************************ 00:31:58.122 16:47:34 -- common/autotest_common.sh@10 -- # set +x 00:31:58.122 END TEST bdev_fio 00:31:58.122 ************************************ 00:31:58.122 16:47:34 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:58.122 16:47:34 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:58.122 16:47:34 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:31:58.122 16:47:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:58.122 16:47:34 -- common/autotest_common.sh@10 -- # set +x 00:31:58.122 ************************************ 00:31:58.122 START TEST bdev_verify 00:31:58.122 ************************************ 00:31:58.122 16:47:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:58.122 [2024-07-11 16:47:34.865866] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:31:58.122 [2024-07-11 16:47:34.866059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145761 ] 00:31:58.382 [2024-07-11 16:47:35.038537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:58.639 [2024-07-11 16:47:35.222932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.639 [2024-07-11 16:47:35.222927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:58.896 Running I/O for 5 seconds... 
00:32:04.162 00:32:04.162 Latency(us) 00:32:04.162 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.162 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:04.162 Verification LBA range: start 0x0 length 0x2000 00:32:04.162 raid5f : 5.01 11980.23 46.80 0.00 0.00 16927.55 262.52 14298.76 00:32:04.162 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:04.162 Verification LBA range: start 0x2000 length 0x2000 00:32:04.162 raid5f : 5.01 12004.54 46.89 0.00 0.00 16894.34 208.52 14239.19 00:32:04.162 =================================================================================================================== 00:32:04.162 Total : 23984.78 93.69 0.00 0.00 16910.92 208.52 14298.76 00:32:05.097 00:32:05.097 real 0m7.015s 00:32:05.097 user 0m12.924s 00:32:05.097 sys 0m0.240s 00:32:05.097 ************************************ 00:32:05.097 END TEST bdev_verify 00:32:05.097 ************************************ 00:32:05.097 16:47:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:05.097 16:47:41 -- common/autotest_common.sh@10 -- # set +x 00:32:05.097 16:47:41 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:05.097 16:47:41 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:32:05.097 16:47:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:05.097 16:47:41 -- common/autotest_common.sh@10 -- # set +x 00:32:05.097 ************************************ 00:32:05.097 START TEST bdev_verify_big_io 00:32:05.097 ************************************ 00:32:05.097 16:47:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:05.356 [2024-07-11 16:47:41.912504] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:05.356 [2024-07-11 16:47:41.912670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145882 ] 00:32:05.356 [2024-07-11 16:47:42.066732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:05.614 [2024-07-11 16:47:42.242433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.614 [2024-07-11 16:47:42.242441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.181 Running I/O for 5 seconds... 
00:32:11.440 00:32:11.440 Latency(us) 00:32:11.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:11.440 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:11.440 Verification LBA range: start 0x0 length 0x200 00:32:11.440 raid5f : 5.13 812.35 50.77 0.00 0.00 4108355.08 131.26 127735.62 00:32:11.440 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:11.440 Verification LBA range: start 0x200 length 0x200 00:32:11.440 raid5f : 5.13 813.85 50.87 0.00 0.00 4098908.03 242.04 126782.37 00:32:11.440 =================================================================================================================== 00:32:11.440 Total : 1626.21 101.64 0.00 0.00 4103627.59 131.26 127735.62 00:32:12.373 00:32:12.373 real 0m7.158s 00:32:12.373 user 0m13.239s 00:32:12.373 sys 0m0.245s 00:32:12.373 ************************************ 00:32:12.373 16:47:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:12.373 16:47:49 -- common/autotest_common.sh@10 -- # set +x 00:32:12.373 END TEST bdev_verify_big_io 00:32:12.373 ************************************ 00:32:12.373 16:47:49 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:12.373 16:47:49 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:12.373 16:47:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:12.373 16:47:49 -- common/autotest_common.sh@10 -- # set +x 00:32:12.373 ************************************ 00:32:12.373 START TEST bdev_write_zeroes 00:32:12.373 ************************************ 00:32:12.373 16:47:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:12.373 [2024-07-11 16:47:49.143720] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:12.373 [2024-07-11 16:47:49.144108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146004 ] 00:32:12.630 [2024-07-11 16:47:49.310717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:12.888 [2024-07-11 16:47:49.494243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.145 Running I/O for 1 seconds... 
00:32:14.520 00:32:14.520 Latency(us) 00:32:14.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:14.520 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:14.520 raid5f : 1.01 27556.42 107.64 0.00 0.00 4630.25 1437.32 5779.08 00:32:14.520 =================================================================================================================== 00:32:14.520 Total : 27556.42 107.64 0.00 0.00 4630.25 1437.32 5779.08 00:32:15.454 00:32:15.454 real 0m2.989s 00:32:15.454 user 0m2.605s 00:32:15.454 sys 0m0.270s 00:32:15.454 ************************************ 00:32:15.454 END TEST bdev_write_zeroes 00:32:15.454 ************************************ 00:32:15.454 16:47:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:15.454 16:47:52 -- common/autotest_common.sh@10 -- # set +x 00:32:15.454 16:47:52 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:15.454 16:47:52 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:15.454 16:47:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:15.454 16:47:52 -- common/autotest_common.sh@10 -- # set +x 00:32:15.454 ************************************ 00:32:15.454 START TEST bdev_json_nonenclosed 00:32:15.454 ************************************ 00:32:15.454 16:47:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:15.454 [2024-07-11 16:47:52.172795] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:15.454 [2024-07-11 16:47:52.172995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146062 ] 00:32:15.713 [2024-07-11 16:47:52.326374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.713 [2024-07-11 16:47:52.509979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.713 [2024-07-11 16:47:52.510204] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:32:15.713 [2024-07-11 16:47:52.510256] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:16.282 00:32:16.282 real 0m0.736s 00:32:16.282 user 0m0.515s 00:32:16.282 sys 0m0.120s 00:32:16.282 ************************************ 00:32:16.282 END TEST bdev_json_nonenclosed 00:32:16.282 16:47:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:16.282 16:47:52 -- common/autotest_common.sh@10 -- # set +x 00:32:16.282 ************************************ 00:32:16.282 16:47:52 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:16.282 16:47:52 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:32:16.282 16:47:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:16.282 16:47:52 -- common/autotest_common.sh@10 -- # set +x 00:32:16.282 ************************************ 00:32:16.282 START TEST bdev_json_nonarray 00:32:16.282 ************************************ 00:32:16.282 16:47:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:16.282 [2024-07-11 16:47:52.974524] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:32:16.282 [2024-07-11 16:47:52.974709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146093 ] 00:32:16.554 [2024-07-11 16:47:53.137897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.554 [2024-07-11 16:47:53.331093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.554 [2024-07-11 16:47:53.331313] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:32:16.554 [2024-07-11 16:47:53.331362] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:17.167 00:32:17.167 real 0m0.766s 00:32:17.167 user 0m0.541s 00:32:17.167 sys 0m0.125s 00:32:17.167 ************************************ 00:32:17.167 END TEST bdev_json_nonarray 00:32:17.167 ************************************ 00:32:17.167 16:47:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:17.167 16:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:17.167 16:47:53 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:32:17.167 16:47:53 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:32:17.167 16:47:53 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:32:17.167 16:47:53 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:32:17.167 16:47:53 -- bdev/blockdev.sh@809 -- # cleanup 00:32:17.167 16:47:53 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:32:17.167 16:47:53 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:17.167 16:47:53 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:32:17.167 16:47:53 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:32:17.167 16:47:53 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:32:17.167 16:47:53 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:32:17.167 ************************************ 00:32:17.167 END TEST blockdev_raid5f 00:32:17.167 ************************************ 00:32:17.167 00:32:17.167 real 0m45.506s 00:32:17.167 user 1m2.402s 00:32:17.167 sys 0m3.859s 00:32:17.167 16:47:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:17.167 16:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:17.167 16:47:53 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:17.167 16:47:53 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:17.167 16:47:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:17.167 16:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:17.167 16:47:53 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:17.167 16:47:53 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:17.167 16:47:53 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:17.167 16:47:53 -- common/autotest_common.sh@10 -- # set +x 00:32:18.540 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:18.540 Waiting for block devices as requested 00:32:18.540 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:32:18.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:32:19.056 Cleaning 00:32:19.056 Removing: /var/run/dpdk/spdk0/config 00:32:19.056 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:19.056 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:19.056 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:19.056 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:19.056 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:19.056 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:19.056 Removing: /dev/shm/spdk_tgt_trace.pid105027 00:32:19.056 Removing: /var/run/dpdk/spdk0 00:32:19.056 Removing: /var/run/dpdk/spdk_pid104768 00:32:19.056 Removing: /var/run/dpdk/spdk_pid105027 00:32:19.056 Removing: /var/run/dpdk/spdk_pid105323 00:32:19.056 Removing: /var/run/dpdk/spdk_pid105607 00:32:19.056 Removing: /var/run/dpdk/spdk_pid105784 00:32:19.056 Removing: /var/run/dpdk/spdk_pid105919 00:32:19.056 Removing: /var/run/dpdk/spdk_pid106024 
00:32:19.056 Removing: /var/run/dpdk/spdk_pid106175 00:32:19.056 Removing: /var/run/dpdk/spdk_pid106289 00:32:19.056 Removing: /var/run/dpdk/spdk_pid106342 00:32:19.056 Removing: /var/run/dpdk/spdk_pid106392 00:32:19.056 Removing: /var/run/dpdk/spdk_pid106481 00:32:19.056 Removing: /var/run/dpdk/spdk_pid106604 00:32:19.056 Removing: /var/run/dpdk/spdk_pid107187 00:32:19.056 Removing: /var/run/dpdk/spdk_pid107291 00:32:19.056 Removing: /var/run/dpdk/spdk_pid107365 00:32:19.056 Removing: /var/run/dpdk/spdk_pid107400 00:32:19.056 Removing: /var/run/dpdk/spdk_pid107557 00:32:19.056 Removing: /var/run/dpdk/spdk_pid107587 00:32:19.056 Removing: /var/run/dpdk/spdk_pid107742 00:32:19.056 Removing: /var/run/dpdk/spdk_pid107777 00:32:19.056 Removing: /var/run/dpdk/spdk_pid107847 00:32:19.056 Removing: /var/run/dpdk/spdk_pid107892 00:32:19.056 Removing: /var/run/dpdk/spdk_pid107960 00:32:19.056 Removing: /var/run/dpdk/spdk_pid107991 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108200 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108245 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108286 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108371 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108483 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108529 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108612 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108657 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108705 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108755 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108809 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108843 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108890 00:32:19.056 Removing: /var/run/dpdk/spdk_pid108926 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109000 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109033 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109080 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109121 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109186 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109220 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109274 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109308 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109372 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109417 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109469 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109501 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109548 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109609 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109656 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109691 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109751 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109804 00:32:19.056 Removing: /var/run/dpdk/spdk_pid109860 00:32:19.057 Removing: /var/run/dpdk/spdk_pid109894 00:32:19.057 Removing: /var/run/dpdk/spdk_pid109953 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110009 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110056 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110097 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110144 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110211 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110262 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110299 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110356 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110404 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110466 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110500 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110547 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110637 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110781 00:32:19.057 Removing: /var/run/dpdk/spdk_pid110961 00:32:19.057 
Removing: /var/run/dpdk/spdk_pid111074 00:32:19.315 Removing: /var/run/dpdk/spdk_pid111136 00:32:19.315 Removing: /var/run/dpdk/spdk_pid112473 00:32:19.315 Removing: /var/run/dpdk/spdk_pid112712 00:32:19.315 Removing: /var/run/dpdk/spdk_pid112958 00:32:19.315 Removing: /var/run/dpdk/spdk_pid113099 00:32:19.315 Removing: /var/run/dpdk/spdk_pid113229 00:32:19.315 Removing: /var/run/dpdk/spdk_pid113316 00:32:19.315 Removing: /var/run/dpdk/spdk_pid113354 00:32:19.315 Removing: /var/run/dpdk/spdk_pid113383 00:32:19.315 Removing: /var/run/dpdk/spdk_pid113903 00:32:19.315 Removing: /var/run/dpdk/spdk_pid113990 00:32:19.315 Removing: /var/run/dpdk/spdk_pid114130 00:32:19.315 Removing: /var/run/dpdk/spdk_pid114188 00:32:19.315 Removing: /var/run/dpdk/spdk_pid115446 00:32:19.315 Removing: /var/run/dpdk/spdk_pid116397 00:32:19.315 Removing: /var/run/dpdk/spdk_pid117294 00:32:19.315 Removing: /var/run/dpdk/spdk_pid118484 00:32:19.315 Removing: /var/run/dpdk/spdk_pid119600 00:32:19.315 Removing: /var/run/dpdk/spdk_pid120740 00:32:19.315 Removing: /var/run/dpdk/spdk_pid122285 00:32:19.315 Removing: /var/run/dpdk/spdk_pid123558 00:32:19.315 Removing: /var/run/dpdk/spdk_pid124823 00:32:19.315 Removing: /var/run/dpdk/spdk_pid125513 00:32:19.315 Removing: /var/run/dpdk/spdk_pid126082 00:32:19.315 Removing: /var/run/dpdk/spdk_pid126741 00:32:19.315 Removing: /var/run/dpdk/spdk_pid127233 00:32:19.316 Removing: /var/run/dpdk/spdk_pid127817 00:32:19.316 Removing: /var/run/dpdk/spdk_pid128396 00:32:19.316 Removing: /var/run/dpdk/spdk_pid129085 00:32:19.316 Removing: /var/run/dpdk/spdk_pid129638 00:32:19.316 Removing: /var/run/dpdk/spdk_pid131092 00:32:19.316 Removing: /var/run/dpdk/spdk_pid131732 00:32:19.316 Removing: /var/run/dpdk/spdk_pid132319 00:32:19.316 Removing: /var/run/dpdk/spdk_pid133924 00:32:19.316 Removing: /var/run/dpdk/spdk_pid134628 00:32:19.316 Removing: /var/run/dpdk/spdk_pid135285 00:32:19.316 Removing: /var/run/dpdk/spdk_pid136103 00:32:19.316 Removing: /var/run/dpdk/spdk_pid136171 00:32:19.316 Removing: /var/run/dpdk/spdk_pid136234 00:32:19.316 Removing: /var/run/dpdk/spdk_pid136296 00:32:19.316 Removing: /var/run/dpdk/spdk_pid136440 00:32:19.316 Removing: /var/run/dpdk/spdk_pid136592 00:32:19.316 Removing: /var/run/dpdk/spdk_pid136829 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137114 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137141 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137209 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137229 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137261 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137289 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137320 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137362 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137393 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137421 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137442 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137474 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137501 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137547 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137579 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137599 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137632 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137659 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137684 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137732 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137779 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137807 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137850 00:32:19.316 Removing: /var/run/dpdk/spdk_pid137919 00:32:19.316 Removing: 
/var/run/dpdk/spdk_pid137973 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138019 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138057 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138085 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138104 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138168 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138188 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138251 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138274 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138301 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138325 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138342 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138370 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138408 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138432 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138470 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138524 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138544 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138589 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138616 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138655 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138719 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138739 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138782 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138806 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138832 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138868 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138892 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138920 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138938 00:32:19.316 Removing: /var/run/dpdk/spdk_pid138962 00:32:19.316 Removing: /var/run/dpdk/spdk_pid139044 00:32:19.316 Removing: /var/run/dpdk/spdk_pid139158 00:32:19.316 Removing: /var/run/dpdk/spdk_pid139318 00:32:19.316 Removing: /var/run/dpdk/spdk_pid139353 00:32:19.575 Removing: /var/run/dpdk/spdk_pid139398 00:32:19.575 Removing: /var/run/dpdk/spdk_pid139464 00:32:19.575 Removing: /var/run/dpdk/spdk_pid139516 00:32:19.575 Removing: /var/run/dpdk/spdk_pid139545 00:32:19.575 Removing: /var/run/dpdk/spdk_pid139579 00:32:19.575 Removing: /var/run/dpdk/spdk_pid139623 00:32:19.575 Removing: /var/run/dpdk/spdk_pid139649 00:32:19.575 Removing: /var/run/dpdk/spdk_pid139731 00:32:19.575 Removing: /var/run/dpdk/spdk_pid139818 00:32:19.575 Removing: /var/run/dpdk/spdk_pid139869 00:32:19.575 Removing: /var/run/dpdk/spdk_pid140130 00:32:19.575 Removing: /var/run/dpdk/spdk_pid140264 00:32:19.575 Removing: /var/run/dpdk/spdk_pid140305 00:32:19.575 Removing: /var/run/dpdk/spdk_pid140418 00:32:19.575 Removing: /var/run/dpdk/spdk_pid140511 00:32:19.575 Removing: /var/run/dpdk/spdk_pid140549 00:32:19.575 Removing: /var/run/dpdk/spdk_pid140836 00:32:19.575 Removing: /var/run/dpdk/spdk_pid141059 00:32:19.575 Removing: /var/run/dpdk/spdk_pid141172 00:32:19.575 Removing: /var/run/dpdk/spdk_pid141228 00:32:19.575 Removing: /var/run/dpdk/spdk_pid141250 00:32:19.575 Removing: /var/run/dpdk/spdk_pid141333 00:32:19.575 Removing: /var/run/dpdk/spdk_pid141886 00:32:19.575 Removing: /var/run/dpdk/spdk_pid141936 00:32:19.575 Removing: /var/run/dpdk/spdk_pid142286 00:32:19.575 Removing: /var/run/dpdk/spdk_pid142442 00:32:19.575 Removing: /var/run/dpdk/spdk_pid142576 00:32:19.575 Removing: /var/run/dpdk/spdk_pid142627 00:32:19.575 Removing: /var/run/dpdk/spdk_pid142659 00:32:19.575 Removing: /var/run/dpdk/spdk_pid142697 00:32:19.575 Removing: /var/run/dpdk/spdk_pid144150 00:32:19.575 Removing: /var/run/dpdk/spdk_pid144298 00:32:19.575 Removing: /var/run/dpdk/spdk_pid144302 00:32:19.575 Removing: 
/var/run/dpdk/spdk_pid144328
00:32:19.575 Removing: /var/run/dpdk/spdk_pid144843
00:32:19.575 Removing: /var/run/dpdk/spdk_pid144950
00:32:19.575 Removing: /var/run/dpdk/spdk_pid145131
00:32:19.575 Removing: /var/run/dpdk/spdk_pid145205
00:32:19.575 Removing: /var/run/dpdk/spdk_pid145251
00:32:19.575 Removing: /var/run/dpdk/spdk_pid145563
00:32:19.575 Removing: /var/run/dpdk/spdk_pid145761
00:32:19.575 Removing: /var/run/dpdk/spdk_pid145882
00:32:19.575 Removing: /var/run/dpdk/spdk_pid146004
00:32:19.575 Removing: /var/run/dpdk/spdk_pid146062
00:32:19.575 Removing: /var/run/dpdk/spdk_pid146093
00:32:19.575 Clean
00:32:19.575 killing process with pid 93869
00:32:19.575 killing process with pid 93952
00:32:19.575 16:47:56 -- common/autotest_common.sh@1436 -- # return 0
00:32:19.575 16:47:56 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:32:19.575 16:47:56 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:19.575 16:47:56 -- common/autotest_common.sh@10 -- # set +x
00:32:19.833 16:47:56 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:32:19.833 16:47:56 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:19.833 16:47:56 -- common/autotest_common.sh@10 -- # set +x
00:32:19.833 16:47:56 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:19.833 16:47:56 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:32:19.833 16:47:56 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:32:19.833 16:47:56 -- spdk/autotest.sh@394 -- # hash lcov
00:32:19.833 16:47:56 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:19.833 16:47:56 -- spdk/autotest.sh@396 -- # hostname
00:32:19.833 16:47:56 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:20.092 geninfo: WARNING: invalid characters removed from testname!
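The capture step above is lcov's --capture (-c) mode reading the .gcda/.gcno counter files that the instrumented test run left under the repo. A minimal standalone reproduction of the same idea, assuming gcc and a hypothetical demo.c rather than this job's tree:

    # compile with coverage instrumentation, run once, then harvest the counters
    gcc --coverage -O0 -o demo demo.c
    ./demo
    lcov --capture --directory . --no-external --output-file cov_test.info

The --no-external flag, also passed above, keeps system headers out of the capture; -t only sets the test name recorded in the tracefile.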
00:32:58.805 16:48:34 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:04.070 16:48:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:06.640 16:48:43 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:09.927 16:48:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:13.217 16:48:49 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:16.505 16:48:52 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:19.035 16:48:55 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
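Stripped of the --rc display knobs, the sequence above merges the base and test captures (-a) and then prunes vendored and system paths (-r), rewriting the tracefile in place on each pass, which is why several passes are chained. A condensed equivalent, with an assumed genhtml step that this job does not run, would be:

    lcov -a cov_base.info -a cov_test.info -o cov_total.info   # union of both captures
    lcov -r cov_total.info '*/dpdk/*' -o cov_total.info        # drop vendored DPDK
    lcov -r cov_total.info '/usr/*' -o cov_total.info          # drop system code
    genhtml cov_total.info -o coverage-html                    # render an HTML report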
00:33:19.035 16:48:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:33:19.035 16:48:55 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:19.035 16:48:55 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:19.035 16:48:55 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:19.035 16:48:55 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:33:19.035 16:48:55 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:33:19.035 16:48:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:33:19.035 16:48:55 -- paths/export.sh@5 -- $ export PATH
00:33:19.035 16:48:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:33:19.035 16:48:55 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:33:19.035 16:48:55 -- common/autobuild_common.sh@435 -- $ date +%s
00:33:19.035 16:48:55 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720716535.XXXXXX
00:33:19.035 16:48:55 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720716535.bzfnLF
00:33:19.035 16:48:55 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:33:19.035 16:48:55 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:33:19.035 16:48:55 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:33:19.035 16:48:55 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:33:19.035 16:48:55 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:33:19.035 16:48:55 -- common/autobuild_common.sh@451 -- $ get_config_params
00:33:19.035 16:48:55 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:33:19.035 16:48:55 -- common/autotest_common.sh@10 -- $ set +x
00:33:19.035 16:48:55 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f'
00:33:19.035 16:48:55 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:33:19.035 16:48:55 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:33:19.035 16:48:55 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:19.035 16:48:55 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:33:19.035 16:48:55 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:33:19.035 16:48:55 -- spdk/autopackage.sh@23 -- $ timing_enter build_release
00:33:19.035 16:48:55 -- common/autotest_common.sh@712 -- $ xtrace_disable
00:33:19.035 16:48:55 -- common/autotest_common.sh@10 -- $ set +x
00:33:19.035 16:48:55 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]]
00:33:19.035 16:48:55 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]]
00:33:19.035 16:48:55 -- spdk/autopackage.sh@40 -- $ get_config_params
00:33:19.035 16:48:55 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g
00:33:19.035 16:48:55 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:33:19.035 16:48:55 -- common/autotest_common.sh@10 -- $ set +x
00:33:19.035 16:48:55 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f'
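The xtrace run above boils down to a short release recipe: take the debug configuration, strip --enable-debug, and reconfigure with LTO before rebuilding, which is exactly what the configure call that follows does. In plain shell, with get_config_params being the helper from autotest_common.sh seen above:

    config_params="$(get_config_params | sed 's/--enable-debug//g')"
    ./configure $config_params --enable-lto   # unquoted on purpose: the shell
                                              # re-splits it into individual flags
    make -j10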
00:33:19.035 16:48:55 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --enable-lto
00:33:19.035 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:33:19.035 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:33:19.294 Using 'verbs' RDMA provider
00:33:32.059 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:33:44.288 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:33:44.288 Creating mk/config.mk...done.
00:33:44.288 Creating mk/cc.flags.mk...done.
00:33:44.288 Type 'make' to build.
00:33:44.288 16:49:19 -- spdk/autopackage.sh@43 -- $ make -j10
00:33:44.288 make[1]: Nothing to be done for 'all'.
00:33:48.472 The Meson build system
00:33:48.472 Version: 1.4.0
00:33:48.472 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:33:48.472 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:33:48.472 Build type: native build
00:33:48.472 Program cat found: YES (/usr/bin/cat)
00:33:48.472 Project name: DPDK
00:33:48.472 Project version: 23.11.0
00:33:48.472 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0")
00:33:48.472 C linker for the host machine: cc ld.bfd 2.34
00:33:48.472 Host machine cpu family: x86_64
00:33:48.472 Host machine cpu: x86_64
00:33:48.472 Message: ## Building in Developer Mode ##
00:33:48.472 Program pkg-config found: YES (/usr/bin/pkg-config)
00:33:48.472 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:33:48.472 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:33:48.472 Program python3 found: YES (/usr/bin/python3)
00:33:48.472 Program cat found: YES (/usr/bin/cat)
00:33:48.472 Compiler for C supports arguments -march=native: YES
00:33:48.472 Checking for size of "void *" : 8
00:33:48.472 Checking for size of "void *" : 8 (cached)
00:33:48.472 Library m found: YES
00:33:48.472 Library numa found: YES
00:33:48.472 Has header "numaif.h" : YES
00:33:48.472 Library fdt found: NO
00:33:48.472 Library execinfo found: NO
00:33:48.472 Has header "execinfo.h" : YES
00:33:48.472 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1
00:33:48.472 Run-time dependency libarchive found: NO (tried pkgconfig)
00:33:48.472 Run-time dependency libbsd found: NO (tried pkgconfig)
00:33:48.472 Run-time dependency jansson found: NO (tried pkgconfig)
00:33:48.472 Run-time dependency openssl found: YES 1.1.1f
00:33:48.472 Run-time dependency libpcap found: NO (tried pkgconfig)
00:33:48.472 Library pcap found: NO
00:33:48.472 Compiler for C supports arguments -Wcast-qual: YES
00:33:48.472 Compiler for C supports arguments -Wdeprecated: YES
00:33:48.472 Compiler for C supports arguments -Wformat: YES
00:33:48.472 Compiler for C supports arguments -Wformat-nonliteral: YES
00:33:48.472 Compiler for C supports arguments -Wformat-security: YES
00:33:48.472 Compiler for C supports arguments -Wmissing-declarations: YES
00:33:48.472 Compiler for C supports arguments -Wmissing-prototypes: YES
00:33:48.472 Compiler for C supports arguments -Wnested-externs: YES
00:33:48.472 Compiler for C supports arguments -Wold-style-definition: YES
00:33:48.472 Compiler for C supports arguments -Wpointer-arith: YES
00:33:48.472 Compiler for C supports arguments -Wsign-compare: YES
00:33:48.472 Compiler for C
supports arguments -Wstrict-prototypes: YES 00:33:48.472 Compiler for C supports arguments -Wundef: YES 00:33:48.472 Compiler for C supports arguments -Wwrite-strings: YES 00:33:48.473 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:33:48.473 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:33:48.473 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:33:48.473 Program objdump found: YES (/usr/bin/objdump) 00:33:48.473 Compiler for C supports arguments -mavx512f: YES 00:33:48.473 Checking if "AVX512 checking" compiles: YES 00:33:48.473 Fetching value of define "__SSE4_2__" : 1 00:33:48.473 Fetching value of define "__AES__" : 1 00:33:48.473 Fetching value of define "__AVX__" : 1 00:33:48.473 Fetching value of define "__AVX2__" : 1 00:33:48.473 Fetching value of define "__AVX512BW__" : (undefined) 00:33:48.473 Fetching value of define "__AVX512CD__" : (undefined) 00:33:48.473 Fetching value of define "__AVX512DQ__" : (undefined) 00:33:48.473 Fetching value of define "__AVX512F__" : (undefined) 00:33:48.473 Fetching value of define "__AVX512VL__" : (undefined) 00:33:48.473 Fetching value of define "__PCLMUL__" : 1 00:33:48.473 Fetching value of define "__RDRND__" : 1 00:33:48.473 Fetching value of define "__RDSEED__" : 1 00:33:48.473 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:33:48.473 Fetching value of define "__znver1__" : (undefined) 00:33:48.473 Fetching value of define "__znver2__" : (undefined) 00:33:48.473 Fetching value of define "__znver3__" : (undefined) 00:33:48.473 Fetching value of define "__znver4__" : (undefined) 00:33:48.473 Compiler for C supports arguments -ffat-lto-objects: YES 00:33:48.473 Library asan found: YES 00:33:48.473 Compiler for C supports arguments -Wno-format-truncation: YES 00:33:48.473 Message: lib/log: Defining dependency "log" 00:33:48.473 Message: lib/kvargs: Defining dependency "kvargs" 00:33:48.473 Message: lib/telemetry: Defining dependency "telemetry" 00:33:48.473 Library rt found: YES 00:33:48.473 Checking for function "getentropy" : NO 00:33:48.473 Message: lib/eal: Defining dependency "eal" 00:33:48.473 Message: lib/ring: Defining dependency "ring" 00:33:48.473 Message: lib/rcu: Defining dependency "rcu" 00:33:48.473 Message: lib/mempool: Defining dependency "mempool" 00:33:48.473 Message: lib/mbuf: Defining dependency "mbuf" 00:33:48.473 Fetching value of define "__PCLMUL__" : 1 (cached) 00:33:48.473 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:33:48.473 Compiler for C supports arguments -mpclmul: YES 00:33:48.473 Compiler for C supports arguments -maes: YES 00:33:48.473 Compiler for C supports arguments -mavx512f: YES (cached) 00:33:48.473 Compiler for C supports arguments -mavx512bw: YES 00:33:48.473 Compiler for C supports arguments -mavx512dq: YES 00:33:48.473 Compiler for C supports arguments -mavx512vl: YES 00:33:48.473 Compiler for C supports arguments -mvpclmulqdq: YES 00:33:48.473 Compiler for C supports arguments -mavx2: YES 00:33:48.473 Compiler for C supports arguments -mavx: YES 00:33:48.473 Message: lib/net: Defining dependency "net" 00:33:48.473 Message: lib/meter: Defining dependency "meter" 00:33:48.473 Message: lib/ethdev: Defining dependency "ethdev" 00:33:48.473 Message: lib/pci: Defining dependency "pci" 00:33:48.473 Message: lib/cmdline: Defining dependency "cmdline" 00:33:48.473 Message: lib/hash: Defining dependency "hash" 00:33:48.473 Message: lib/timer: Defining dependency "timer" 00:33:48.473 Message: lib/compressdev: 
Defining dependency "compressdev" 00:33:48.473 Message: lib/cryptodev: Defining dependency "cryptodev" 00:33:48.473 Message: lib/dmadev: Defining dependency "dmadev" 00:33:48.473 Compiler for C supports arguments -Wno-cast-qual: YES 00:33:48.473 Message: lib/power: Defining dependency "power" 00:33:48.473 Message: lib/reorder: Defining dependency "reorder" 00:33:48.473 Message: lib/security: Defining dependency "security" 00:33:48.473 Has header "linux/userfaultfd.h" : YES 00:33:48.473 Has header "linux/vduse.h" : NO 00:33:48.473 Message: lib/vhost: Defining dependency "vhost" 00:33:48.473 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:33:48.473 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:33:48.473 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:33:48.473 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:33:48.473 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:33:48.473 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:33:48.473 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:33:48.473 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:33:48.473 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:33:48.473 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:33:48.473 Program doxygen found: YES (/usr/bin/doxygen) 00:33:48.473 Configuring doxy-api-html.conf using configuration 00:33:48.473 Configuring doxy-api-man.conf using configuration 00:33:48.473 Program mandb found: YES (/usr/bin/mandb) 00:33:48.473 Program sphinx-build found: NO 00:33:48.473 Configuring rte_build_config.h using configuration 00:33:48.473 Message: 00:33:48.473 ================= 00:33:48.473 Applications Enabled 00:33:48.473 ================= 00:33:48.473 00:33:48.473 apps: 00:33:48.473 00:33:48.473 00:33:48.473 Message: 00:33:48.473 ================= 00:33:48.473 Libraries Enabled 00:33:48.473 ================= 00:33:48.473 00:33:48.473 libs: 00:33:48.473 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:33:48.473 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:33:48.473 cryptodev, dmadev, power, reorder, security, vhost, 00:33:48.473 00:33:48.473 Message: 00:33:48.473 =============== 00:33:48.473 Drivers Enabled 00:33:48.473 =============== 00:33:48.473 00:33:48.473 common: 00:33:48.473 00:33:48.473 bus: 00:33:48.473 pci, vdev, 00:33:48.473 mempool: 00:33:48.473 ring, 00:33:48.473 dma: 00:33:48.473 00:33:48.473 net: 00:33:48.473 00:33:48.473 crypto: 00:33:48.473 00:33:48.473 compress: 00:33:48.473 00:33:48.473 vdpa: 00:33:48.473 00:33:48.473 00:33:48.473 Message: 00:33:48.473 ================= 00:33:48.473 Content Skipped 00:33:48.473 ================= 00:33:48.473 00:33:48.473 apps: 00:33:48.473 dumpcap: explicitly disabled via build config 00:33:48.473 graph: explicitly disabled via build config 00:33:48.473 pdump: explicitly disabled via build config 00:33:48.473 proc-info: explicitly disabled via build config 00:33:48.473 test-acl: explicitly disabled via build config 00:33:48.473 test-bbdev: explicitly disabled via build config 00:33:48.473 test-cmdline: explicitly disabled via build config 00:33:48.473 test-compress-perf: explicitly disabled via build config 00:33:48.473 test-crypto-perf: explicitly disabled via build config 00:33:48.473 test-dma-perf: explicitly disabled via build config 00:33:48.473 test-eventdev: explicitly disabled via build config 00:33:48.473 
test-fib: explicitly disabled via build config 00:33:48.473 test-flow-perf: explicitly disabled via build config 00:33:48.473 test-gpudev: explicitly disabled via build config 00:33:48.473 test-mldev: explicitly disabled via build config 00:33:48.473 test-pipeline: explicitly disabled via build config 00:33:48.473 test-pmd: explicitly disabled via build config 00:33:48.473 test-regex: explicitly disabled via build config 00:33:48.473 test-sad: explicitly disabled via build config 00:33:48.473 test-security-perf: explicitly disabled via build config 00:33:48.473 00:33:48.473 libs: 00:33:48.473 metrics: explicitly disabled via build config 00:33:48.473 acl: explicitly disabled via build config 00:33:48.473 bbdev: explicitly disabled via build config 00:33:48.473 bitratestats: explicitly disabled via build config 00:33:48.473 bpf: explicitly disabled via build config 00:33:48.473 cfgfile: explicitly disabled via build config 00:33:48.473 distributor: explicitly disabled via build config 00:33:48.474 efd: explicitly disabled via build config 00:33:48.474 eventdev: explicitly disabled via build config 00:33:48.474 dispatcher: explicitly disabled via build config 00:33:48.474 gpudev: explicitly disabled via build config 00:33:48.474 gro: explicitly disabled via build config 00:33:48.474 gso: explicitly disabled via build config 00:33:48.474 ip_frag: explicitly disabled via build config 00:33:48.474 jobstats: explicitly disabled via build config 00:33:48.474 latencystats: explicitly disabled via build config 00:33:48.474 lpm: explicitly disabled via build config 00:33:48.474 member: explicitly disabled via build config 00:33:48.474 pcapng: explicitly disabled via build config 00:33:48.474 rawdev: explicitly disabled via build config 00:33:48.474 regexdev: explicitly disabled via build config 00:33:48.474 mldev: explicitly disabled via build config 00:33:48.474 rib: explicitly disabled via build config 00:33:48.474 sched: explicitly disabled via build config 00:33:48.474 stack: explicitly disabled via build config 00:33:48.474 ipsec: explicitly disabled via build config 00:33:48.474 pdcp: explicitly disabled via build config 00:33:48.474 fib: explicitly disabled via build config 00:33:48.474 port: explicitly disabled via build config 00:33:48.474 pdump: explicitly disabled via build config 00:33:48.474 table: explicitly disabled via build config 00:33:48.474 pipeline: explicitly disabled via build config 00:33:48.474 graph: explicitly disabled via build config 00:33:48.474 node: explicitly disabled via build config 00:33:48.474 00:33:48.474 drivers: 00:33:48.474 common/cpt: not in enabled drivers build config 00:33:48.474 common/dpaax: not in enabled drivers build config 00:33:48.474 common/iavf: not in enabled drivers build config 00:33:48.474 common/idpf: not in enabled drivers build config 00:33:48.474 common/mvep: not in enabled drivers build config 00:33:48.474 common/octeontx: not in enabled drivers build config 00:33:48.474 bus/auxiliary: not in enabled drivers build config 00:33:48.474 bus/cdx: not in enabled drivers build config 00:33:48.474 bus/dpaa: not in enabled drivers build config 00:33:48.474 bus/fslmc: not in enabled drivers build config 00:33:48.474 bus/ifpga: not in enabled drivers build config 00:33:48.474 bus/platform: not in enabled drivers build config 00:33:48.474 bus/vmbus: not in enabled drivers build config 00:33:48.474 common/cnxk: not in enabled drivers build config 00:33:48.474 common/mlx5: not in enabled drivers build config 00:33:48.474 common/nfp: not in enabled 
drivers build config 00:33:48.474 common/qat: not in enabled drivers build config 00:33:48.474 common/sfc_efx: not in enabled drivers build config 00:33:48.474 mempool/bucket: not in enabled drivers build config 00:33:48.474 mempool/cnxk: not in enabled drivers build config 00:33:48.474 mempool/dpaa: not in enabled drivers build config 00:33:48.474 mempool/dpaa2: not in enabled drivers build config 00:33:48.474 mempool/octeontx: not in enabled drivers build config 00:33:48.474 mempool/stack: not in enabled drivers build config 00:33:48.474 dma/cnxk: not in enabled drivers build config 00:33:48.474 dma/dpaa: not in enabled drivers build config 00:33:48.474 dma/dpaa2: not in enabled drivers build config 00:33:48.474 dma/hisilicon: not in enabled drivers build config 00:33:48.474 dma/idxd: not in enabled drivers build config 00:33:48.474 dma/ioat: not in enabled drivers build config 00:33:48.474 dma/skeleton: not in enabled drivers build config 00:33:48.474 net/af_packet: not in enabled drivers build config 00:33:48.474 net/af_xdp: not in enabled drivers build config 00:33:48.474 net/ark: not in enabled drivers build config 00:33:48.474 net/atlantic: not in enabled drivers build config 00:33:48.474 net/avp: not in enabled drivers build config 00:33:48.474 net/axgbe: not in enabled drivers build config 00:33:48.474 net/bnx2x: not in enabled drivers build config 00:33:48.474 net/bnxt: not in enabled drivers build config 00:33:48.474 net/bonding: not in enabled drivers build config 00:33:48.474 net/cnxk: not in enabled drivers build config 00:33:48.474 net/cpfl: not in enabled drivers build config 00:33:48.474 net/cxgbe: not in enabled drivers build config 00:33:48.474 net/dpaa: not in enabled drivers build config 00:33:48.474 net/dpaa2: not in enabled drivers build config 00:33:48.474 net/e1000: not in enabled drivers build config 00:33:48.474 net/ena: not in enabled drivers build config 00:33:48.474 net/enetc: not in enabled drivers build config 00:33:48.474 net/enetfec: not in enabled drivers build config 00:33:48.474 net/enic: not in enabled drivers build config 00:33:48.474 net/failsafe: not in enabled drivers build config 00:33:48.474 net/fm10k: not in enabled drivers build config 00:33:48.474 net/gve: not in enabled drivers build config 00:33:48.474 net/hinic: not in enabled drivers build config 00:33:48.474 net/hns3: not in enabled drivers build config 00:33:48.474 net/i40e: not in enabled drivers build config 00:33:48.474 net/iavf: not in enabled drivers build config 00:33:48.474 net/ice: not in enabled drivers build config 00:33:48.474 net/idpf: not in enabled drivers build config 00:33:48.474 net/igc: not in enabled drivers build config 00:33:48.474 net/ionic: not in enabled drivers build config 00:33:48.474 net/ipn3ke: not in enabled drivers build config 00:33:48.474 net/ixgbe: not in enabled drivers build config 00:33:48.474 net/mana: not in enabled drivers build config 00:33:48.474 net/memif: not in enabled drivers build config 00:33:48.474 net/mlx4: not in enabled drivers build config 00:33:48.474 net/mlx5: not in enabled drivers build config 00:33:48.474 net/mvneta: not in enabled drivers build config 00:33:48.474 net/mvpp2: not in enabled drivers build config 00:33:48.474 net/netvsc: not in enabled drivers build config 00:33:48.474 net/nfb: not in enabled drivers build config 00:33:48.474 net/nfp: not in enabled drivers build config 00:33:48.474 net/ngbe: not in enabled drivers build config 00:33:48.474 net/null: not in enabled drivers build config 00:33:48.474 net/octeontx: not 
in enabled drivers build config 00:33:48.474 net/octeon_ep: not in enabled drivers build config 00:33:48.474 net/pcap: not in enabled drivers build config 00:33:48.474 net/pfe: not in enabled drivers build config 00:33:48.474 net/qede: not in enabled drivers build config 00:33:48.474 net/ring: not in enabled drivers build config 00:33:48.474 net/sfc: not in enabled drivers build config 00:33:48.474 net/softnic: not in enabled drivers build config 00:33:48.474 net/tap: not in enabled drivers build config 00:33:48.474 net/thunderx: not in enabled drivers build config 00:33:48.474 net/txgbe: not in enabled drivers build config 00:33:48.474 net/vdev_netvsc: not in enabled drivers build config 00:33:48.474 net/vhost: not in enabled drivers build config 00:33:48.474 net/virtio: not in enabled drivers build config 00:33:48.474 net/vmxnet3: not in enabled drivers build config 00:33:48.474 raw/*: missing internal dependency, "rawdev" 00:33:48.474 crypto/armv8: not in enabled drivers build config 00:33:48.474 crypto/bcmfs: not in enabled drivers build config 00:33:48.474 crypto/caam_jr: not in enabled drivers build config 00:33:48.474 crypto/ccp: not in enabled drivers build config 00:33:48.474 crypto/cnxk: not in enabled drivers build config 00:33:48.474 crypto/dpaa_sec: not in enabled drivers build config 00:33:48.474 crypto/dpaa2_sec: not in enabled drivers build config 00:33:48.474 crypto/ipsec_mb: not in enabled drivers build config 00:33:48.474 crypto/mlx5: not in enabled drivers build config 00:33:48.474 crypto/mvsam: not in enabled drivers build config 00:33:48.474 crypto/nitrox: not in enabled drivers build config 00:33:48.474 crypto/null: not in enabled drivers build config 00:33:48.474 crypto/octeontx: not in enabled drivers build config 00:33:48.474 crypto/openssl: not in enabled drivers build config 00:33:48.474 crypto/scheduler: not in enabled drivers build config 00:33:48.474 crypto/uadk: not in enabled drivers build config 00:33:48.474 crypto/virtio: not in enabled drivers build config 00:33:48.474 compress/isal: not in enabled drivers build config 00:33:48.474 compress/mlx5: not in enabled drivers build config 00:33:48.474 compress/octeontx: not in enabled drivers build config 00:33:48.474 compress/zlib: not in enabled drivers build config 00:33:48.474 regex/*: missing internal dependency, "regexdev" 00:33:48.474 ml/*: missing internal dependency, "mldev" 00:33:48.474 vdpa/ifc: not in enabled drivers build config 00:33:48.474 vdpa/mlx5: not in enabled drivers build config 00:33:48.474 vdpa/nfp: not in enabled drivers build config 00:33:48.474 vdpa/sfc: not in enabled drivers build config 00:33:48.474 event/*: missing internal dependency, "eventdev" 00:33:48.474 baseband/*: missing internal dependency, "bbdev" 00:33:48.474 gpu/*: missing internal dependency, "gpudev" 00:33:48.474 00:33:48.474 00:33:48.732 Build targets in project: 85 00:33:48.732 00:33:48.732 DPDK 23.11.0 00:33:48.732 00:33:48.732 User defined options 00:33:48.732 default_library : static 00:33:48.732 libdir : lib 00:33:48.732 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:33:48.732 b_lto : true 00:33:48.732 b_sanitize : address 00:33:48.732 c_args : -fPIC -Werror 00:33:48.732 c_link_args : 00:33:48.732 cpu_instruction_set: native 00:33:48.732 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:33:48.732 
disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:33:48.732 enable_docs : false 00:33:48.732 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:33:48.732 enable_kmods : false 00:33:48.732 tests : false 00:33:48.732 00:33:48.732 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:33:49.296 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:33:49.296 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:33:49.296 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:33:49.296 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:33:49.296 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:33:49.296 [5/264] Linking static target lib/librte_kvargs.a 00:33:49.296 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:33:49.554 [7/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:33:49.554 [8/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:33:49.554 [9/264] Linking static target lib/librte_log.a 00:33:49.554 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:33:49.554 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:33:49.554 [12/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:33:49.554 [13/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:33:49.554 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:33:49.812 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:33:49.812 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:33:50.070 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:33:50.070 [18/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:33:50.070 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:33:50.070 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:33:50.070 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:33:50.070 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:33:50.070 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:33:50.327 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:33:50.327 [25/264] Linking target lib/librte_log.so.24.0 00:33:50.585 [26/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:33:50.585 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:33:50.585 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:33:50.585 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:33:50.585 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:33:50.585 [31/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:33:50.585 [32/264] Linking static target lib/librte_telemetry.a 00:33:50.585 [33/264] Linking target lib/librte_kvargs.so.24.0 00:33:50.585 [34/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:33:50.585 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:33:50.585 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:33:50.842 [37/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:33:50.842 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:33:50.842 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:33:50.842 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:33:50.842 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:33:50.842 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:33:51.099 [43/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:33:51.099 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:33:51.099 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:33:51.356 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:33:51.356 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:33:51.356 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:33:51.356 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:33:51.356 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:33:51.613 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:33:51.613 [52/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:33:51.613 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:33:51.613 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:33:51.613 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:33:51.613 [56/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:33:51.613 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:33:51.872 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:33:51.872 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:33:51.872 [60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:33:51.872 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:33:51.872 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:33:51.872 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:33:52.130 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:33:52.130 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:33:52.130 [66/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:33:52.130 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:33:52.388 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:33:52.388 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:33:52.388 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:33:52.388 [71/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:33:52.388 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:33:52.388 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 
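Each "Linking static target" step above archives objects that were compiled with LTO enabled (the options echoed earlier show b_lto : true, and the -ffat-lto-objects probe passed), so cross-object optimization still happens at the final link of whatever consumes the archive. A toy equivalent with hypothetical sources a.c, b.c and main.c, not taken from this build:

    # fat LTO objects carry both regular machine code and GIMPLE bytecode
    gcc -c -flto -ffat-lto-objects a.c b.c
    gcc-ar rcs libtoy.a a.o b.o           # ar wrapper that loads the LTO plugin
    gcc -flto -O2 main.c libtoy.a -o app  # optimization runs across the archive

Plain ar would also work here only because the objects are fat; slim LTO objects need gcc-ar (or ar with the plugin) to stay linkable.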
00:33:52.388 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:33:52.388 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:33:52.388 [76/264] Linking target lib/librte_telemetry.so.24.0 00:33:52.646 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:33:52.646 [78/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:33:52.646 [79/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:33:52.905 [80/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:33:52.905 [81/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:33:52.905 [82/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:33:52.905 [83/264] Linking static target lib/librte_ring.a 00:33:52.905 [84/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:33:52.905 [85/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:33:53.163 [86/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:33:53.163 [87/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:33:53.163 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:33:53.163 [89/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:33:53.422 [90/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:33:53.422 [91/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:33:53.422 [92/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:33:53.422 [93/264] Linking static target lib/librte_eal.a 00:33:53.680 [94/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:33:53.681 [95/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:33:53.681 [96/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:33:53.681 [97/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:33:53.681 [98/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:33:53.681 [99/264] Linking static target lib/librte_mempool.a 00:33:53.681 [100/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:33:53.681 [101/264] Linking static target lib/librte_rcu.a 00:33:53.939 [102/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:33:53.939 [103/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:33:53.939 [104/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:33:54.198 [105/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:33:54.198 [106/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:33:54.198 [107/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:33:54.198 [108/264] Linking static target lib/librte_meter.a 00:33:54.198 [109/264] Linking static target lib/librte_net.a 00:33:54.198 [110/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:33:54.457 [111/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:33:54.457 [112/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:33:54.457 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:33:54.457 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:33:54.457 [115/264] Generating lib/mempool.sym_chk with a custom command (wrapped 
by meson to capture output) 00:33:54.727 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:33:55.000 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:33:55.000 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:33:55.258 [119/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:33:55.258 [120/264] Linking static target lib/librte_mbuf.a 00:33:55.258 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:33:55.516 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:33:55.516 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:33:55.516 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:33:55.516 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:33:55.516 [126/264] Linking static target lib/librte_pci.a 00:33:55.775 [127/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:33:55.775 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:33:55.775 [129/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:33:55.775 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:33:55.775 [131/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:33:55.775 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:33:56.034 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:33:56.034 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:33:56.034 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:33:56.034 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:33:56.034 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:33:56.034 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:33:56.034 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:33:56.034 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:33:56.306 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:33:56.307 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:33:56.307 [143/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:33:56.307 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:33:56.569 [145/264] Linking static target lib/librte_cmdline.a 00:33:56.569 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:33:56.569 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:33:56.826 [148/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:33:56.826 [149/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:33:56.826 [150/264] Linking static target lib/librte_timer.a 00:33:57.083 [151/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:33:57.083 [152/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:33:57.083 [153/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:33:57.083 [154/264] Linking static target lib/librte_compressdev.a 00:33:57.084 [155/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 
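Two kinds of bookkeeping interleave with the compiles above: "Generating symbol file ...symbols" is Meson recording a shared object's exported symbols so dependents only relink when the interface actually changes, while "Generating lib/X.sym_chk" appears to come from DPDK's buildtools/check-symbols.sh (detected during setup earlier), which compares a library's exports against its version map. A manual spot-check of the same information with generic binutils, library path assumed from this build tree:

    nm -D --defined-only dpdk/build-tmp/lib/librte_log.so.24.0 | awk '{print $3}' | sort | head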
00:33:57.341 [156/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:33:57.341 [157/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:33:57.341 [158/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:33:57.341 [159/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:33:57.599 [160/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:33:57.599 [161/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:33:57.599 [162/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:33:57.599 [163/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:33:58.165 [164/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:33:58.165 [165/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:33:58.165 [166/264] Linking static target lib/librte_dmadev.a 00:33:58.165 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:33:58.165 [168/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:33:58.165 [169/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:33:58.423 [170/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:33:58.423 [171/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:33:58.423 [172/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:33:58.423 [173/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:33:58.681 [174/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:33:58.940 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:33:58.940 [176/264] Linking static target lib/librte_power.a 00:33:58.940 [177/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:33:58.940 [178/264] Linking static target lib/librte_reorder.a 00:33:58.940 [179/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:33:58.940 [180/264] Linking static target lib/librte_security.a 00:33:59.199 [181/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:33:59.199 [182/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:33:59.199 [183/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:33:59.457 [184/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:33:59.457 [185/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:33:59.457 [186/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:34:00.025 [187/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:34:00.025 [188/264] Linking static target lib/librte_cryptodev.a 00:34:00.283 [189/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:34:00.283 [190/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:34:00.283 [191/264] Linking static target lib/librte_ethdev.a 00:34:00.283 [192/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:34:00.283 [193/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:34:00.283 [194/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:34:00.848 
[195/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:34:00.848 [196/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:34:00.848 [197/264] Linking static target lib/librte_hash.a 00:34:00.848 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:34:01.106 [199/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:34:01.364 [200/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:34:01.364 [201/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:34:01.364 [202/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:34:01.364 [203/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:01.931 [204/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:34:01.931 [205/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:34:01.931 [206/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:34:01.931 [207/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:34:01.931 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:34:01.931 [209/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:34:01.931 [210/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:34:01.931 [211/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:34:01.931 [212/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:34:01.931 [213/264] Linking static target drivers/librte_bus_vdev.a 00:34:01.931 [214/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:34:01.931 [215/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:34:01.931 [216/264] Linking static target drivers/librte_bus_pci.a 00:34:02.189 [217/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:02.190 [218/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:34:02.190 [219/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:34:02.448 [220/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:34:02.448 [221/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:34:02.448 [222/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:34:02.448 [223/264] Linking static target drivers/librte_mempool_ring.a 00:34:02.448 [224/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:34:05.733 [225/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:34:12.314 [226/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:34:12.314 [227/264] Linking target lib/librte_eal.so.24.0 00:34:12.314 [228/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:34:12.314 [229/264] Linking target lib/librte_pci.so.24.0 00:34:12.314 [230/264] Linking target lib/librte_meter.so.24.0 00:34:12.314 [231/264] Linking target lib/librte_ring.so.24.0 00:34:12.314 [232/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:34:12.314 [233/264] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:34:12.314 [234/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:34:12.314 [235/264] Linking target drivers/librte_bus_vdev.so.24.0
00:34:12.314 [236/264] Linking target lib/librte_timer.so.24.0
00:34:12.314 [237/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:34:12.314 [238/264] Linking target lib/librte_dmadev.so.24.0
00:34:12.597 [239/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:34:12.866 [240/264] Linking target lib/librte_mempool.so.24.0
00:34:12.866 [241/264] Linking target lib/librte_rcu.so.24.0
00:34:13.123 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:34:13.123 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:34:13.690 [244/264] Linking target drivers/librte_bus_pci.so.24.0
00:34:13.690 [245/264] Linking target drivers/librte_mempool_ring.so.24.0
00:34:15.066 [246/264] Linking target lib/librte_mbuf.so.24.0
00:34:15.066 [247/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:34:15.324 [248/264] Linking target lib/librte_reorder.so.24.0
00:34:15.583 [249/264] Linking target lib/librte_compressdev.so.24.0
00:34:16.150 [250/264] Linking target lib/librte_net.so.24.0
00:34:16.150 [251/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:34:17.544 [252/264] Linking target lib/librte_cmdline.so.24.0
00:34:17.544 [253/264] Linking target lib/librte_cryptodev.so.24.0
00:34:17.544 [254/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:34:18.110 [255/264] Linking target lib/librte_security.so.24.0
00:34:20.640 [256/264] Linking target lib/librte_hash.so.24.0
00:34:20.640 [257/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:34:27.202 [258/264] Linking target lib/librte_ethdev.so.24.0
00:34:27.460 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:34:29.989 [260/264] Linking target lib/librte_power.so.24.0
00:34:34.189 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:34:34.189 [262/264] Linking static target lib/librte_vhost.a
00:34:35.565 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:35:22.224 [264/264] Linking target lib/librte_vhost.so.24.0
00:35:22.224 INFO: autodetecting backend as ninja
00:35:22.224 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
00:35:22.224 CC lib/ut_mock/mock.o
00:35:22.224 CC lib/log/log.o
00:35:22.224 CC lib/log/log_flags.o
00:35:22.224 CC lib/log/log_deprecated.o
00:35:22.224 CC lib/ut/ut.o
00:35:22.224 LIB libspdk_ut_mock.a
00:35:22.224 LIB libspdk_log.a
00:35:22.224 LIB libspdk_ut.a
00:35:22.224 CC lib/util/base64.o
00:35:22.224 CC lib/util/bit_array.o
00:35:22.224 CC lib/dma/dma.o
00:35:22.224 CC lib/util/crc32.o
00:35:22.224 CC lib/util/cpuset.o
00:35:22.224 CC lib/util/crc16.o
00:35:22.224 CC lib/util/crc32c.o
00:35:22.224 CC lib/ioat/ioat.o
00:35:22.224 CXX lib/trace_parser/trace.o
00:35:22.224 CC lib/vfio_user/host/vfio_user_pci.o
00:35:22.224 CC lib/vfio_user/host/vfio_user.o
00:35:22.224 CC lib/util/crc32_ieee.o
00:35:22.224 CC lib/util/crc64.o
00:35:22.224 CC lib/util/dif.o
00:35:22.224 LIB libspdk_dma.a
00:35:22.224 CC lib/util/fd.o
00:35:22.224 CC lib/util/file.o
00:35:22.224 CC lib/util/hexlify.o
00:35:22.224 LIB libspdk_ioat.a
00:35:22.224 CC lib/util/iov.o
00:35:22.224 CC lib/util/math.o
00:35:22.224 CC lib/util/pipe.o
00:35:22.224 CC lib/util/strerror_tls.o
00:35:22.224 LIB libspdk_vfio_user.a
00:35:22.224 CC lib/util/string.o
00:35:22.224 CC lib/util/uuid.o
00:35:22.224 CC lib/util/fd_group.o
00:35:22.224 CC lib/util/xor.o
00:35:22.224 CC lib/util/zipf.o
00:35:22.483 LIB libspdk_util.a
00:35:22.483 LIB libspdk_trace_parser.a
00:35:22.483 CC lib/vmd/vmd.o
00:35:22.483 CC lib/vmd/led.o
00:35:22.483 CC lib/env_dpdk/env.o
00:35:22.483 CC lib/env_dpdk/memory.o
00:35:22.483 CC lib/env_dpdk/pci.o
00:35:22.483 CC lib/env_dpdk/init.o
00:35:22.483 CC lib/rdma/common.o
00:35:22.483 CC lib/conf/conf.o
00:35:22.483 CC lib/idxd/idxd.o
00:35:22.483 CC lib/json/json_parse.o
00:35:22.483 CC lib/json/json_util.o
00:35:22.742 LIB libspdk_conf.a
00:35:22.742 CC lib/json/json_write.o
00:35:22.742 CC lib/env_dpdk/threads.o
00:35:22.742 CC lib/rdma/rdma_verbs.o
00:35:22.742 CC lib/env_dpdk/pci_ioat.o
00:35:22.742 CC lib/env_dpdk/pci_virtio.o
00:35:22.742 CC lib/idxd/idxd_user.o
00:35:22.742 CC lib/env_dpdk/pci_vmd.o
00:35:22.742 CC lib/env_dpdk/pci_idxd.o
00:35:22.742 LIB libspdk_vmd.a
00:35:23.000 CC lib/env_dpdk/pci_event.o
00:35:23.000 LIB libspdk_json.a
00:35:23.000 CC lib/env_dpdk/sigbus_handler.o
00:35:23.000 CC lib/env_dpdk/pci_dpdk.o
00:35:23.000 CC lib/env_dpdk/pci_dpdk_2207.o
00:35:23.000 LIB libspdk_rdma.a
00:35:23.000 CC lib/env_dpdk/pci_dpdk_2211.o
00:35:23.000 LIB libspdk_idxd.a
00:35:23.000 CC lib/jsonrpc/jsonrpc_server.o
00:35:23.000 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:35:23.000 CC lib/jsonrpc/jsonrpc_client.o
00:35:23.000 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:35:23.259 LIB libspdk_jsonrpc.a
00:35:23.259 CC lib/rpc/rpc.o
00:35:23.517 LIB libspdk_env_dpdk.a
00:35:23.517 LIB libspdk_rpc.a
00:35:23.517 CC lib/sock/sock_rpc.o
00:35:23.517 CC lib/sock/sock.o
00:35:23.517 CC lib/notify/notify.o
00:35:23.517 CC lib/notify/notify_rpc.o
00:35:23.517 CC lib/trace/trace.o
00:35:23.517 CC lib/trace/trace_flags.o
00:35:23.517 CC lib/trace/trace_rpc.o
00:35:23.776 LIB libspdk_notify.a
00:35:23.776 LIB libspdk_trace.a
00:35:23.776 LIB libspdk_sock.a
00:35:23.776 CC lib/thread/thread.o
00:35:24.035 CC lib/thread/iobuf.o
00:35:24.035 CC lib/nvme/nvme_ctrlr_cmd.o
00:35:24.035 CC lib/nvme/nvme_ctrlr.o
00:35:24.035 CC lib/nvme/nvme_fabric.o
00:35:24.035 CC lib/nvme/nvme_ns_cmd.o
00:35:24.035 CC lib/nvme/nvme_ns.o
00:35:24.035 CC lib/nvme/nvme_pcie_common.o
00:35:24.035 CC lib/nvme/nvme_pcie.o
00:35:24.035 CC lib/nvme/nvme_qpair.o
00:35:24.035 CC lib/nvme/nvme.o
00:35:24.294 CC lib/nvme/nvme_quirks.o
00:35:24.553 CC lib/nvme/nvme_transport.o
00:35:24.553 CC lib/nvme/nvme_discovery.o
00:35:24.553 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:35:24.553 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:35:24.553 CC lib/nvme/nvme_tcp.o
00:35:24.553 CC lib/nvme/nvme_opal.o
00:35:24.811 LIB libspdk_thread.a
00:35:24.811 CC lib/nvme/nvme_io_msg.o
00:35:24.811 CC lib/nvme/nvme_poll_group.o
00:35:24.811 CC lib/nvme/nvme_zns.o
00:35:25.069 CC lib/accel/accel.o
00:35:25.069 CC lib/nvme/nvme_cuse.o
00:35:25.069 CC lib/accel/accel_rpc.o
00:35:25.069 CC lib/init/json_config.o
00:35:25.069 CC lib/blob/blobstore.o
00:35:25.069 CC lib/init/subsystem.o
00:35:25.069 CC lib/init/subsystem_rpc.o
00:35:25.069 CC lib/init/rpc.o
00:35:25.328 CC lib/blob/request.o
00:35:25.328 CC lib/blob/zeroes.o
00:35:25.328 CC lib/blob/blob_bs_dev.o
00:35:25.328 CC lib/accel/accel_sw.o
00:35:25.328 LIB libspdk_init.a
00:35:25.328 CC lib/nvme/nvme_vfio_user.o
00:35:25.328 CC lib/nvme/nvme_rdma.o
00:35:25.328 CC lib/virtio/virtio.o
00:35:25.328 CC lib/virtio/virtio_vhost_user.o
00:35:25.328 CC lib/virtio/virtio_vfio_user.o
00:35:25.586 CC lib/virtio/virtio_pci.o
00:35:25.586 CC lib/event/app.o
00:35:25.586 CC lib/event/reactor.o
00:35:25.586 CC lib/event/log_rpc.o
00:35:25.586 CC lib/event/app_rpc.o
00:35:25.586 CC lib/event/scheduler_static.o
00:35:25.586 LIB libspdk_accel.a
00:35:25.844 LIB libspdk_virtio.a
00:35:25.844 CC lib/bdev/bdev.o
00:35:25.844 CC lib/bdev/bdev_zone.o
00:35:25.844 CC lib/bdev/scsi_nvme.o
00:35:25.844 CC lib/bdev/bdev_rpc.o
00:35:25.844 CC lib/bdev/part.o
00:35:25.844 LIB libspdk_event.a
00:35:26.103 LIB libspdk_nvme.a
00:35:26.672 LIB libspdk_blob.a
00:35:26.672 CC lib/blobfs/tree.o
00:35:26.672 CC lib/blobfs/blobfs.o
00:35:26.672 CC lib/lvol/lvol.o
00:35:27.237 LIB libspdk_blobfs.a
00:35:27.237 LIB libspdk_bdev.a
00:35:27.237 CC lib/nvmf/ctrlr.o
00:35:27.237 CC lib/nvmf/ctrlr_bdev.o
00:35:27.237 CC lib/nvmf/ctrlr_discovery.o
00:35:27.237 CC lib/nvmf/subsystem.o
00:35:27.237 CC lib/nvmf/nvmf.o
00:35:27.237 CC lib/nvmf/nvmf_rpc.o
00:35:27.237 CC lib/scsi/dev.o
00:35:27.237 CC lib/nbd/nbd.o
00:35:27.237 CC lib/ftl/ftl_core.o
00:35:27.237 LIB libspdk_lvol.a
00:35:27.237 CC lib/nbd/nbd_rpc.o
00:35:27.495 CC lib/nvmf/transport.o
00:35:27.495 CC lib/scsi/lun.o
00:35:27.495 CC lib/scsi/port.o
00:35:27.495 LIB libspdk_nbd.a
00:35:27.495 CC lib/scsi/scsi.o
00:35:27.495 CC lib/scsi/scsi_bdev.o
00:35:27.495 CC lib/ftl/ftl_init.o
00:35:27.495 CC lib/nvmf/tcp.o
00:35:27.753 CC lib/nvmf/rdma.o
00:35:27.753 CC lib/ftl/ftl_layout.o
00:35:27.753 CC lib/scsi/scsi_pr.o
00:35:27.753 CC lib/scsi/scsi_rpc.o
00:35:27.753 CC lib/scsi/task.o
00:35:27.753 CC lib/ftl/ftl_debug.o
00:35:27.753 CC lib/ftl/ftl_io.o
00:35:27.753 CC lib/ftl/ftl_sb.o
00:35:27.753 CC lib/ftl/ftl_l2p.o
00:35:28.011 CC lib/ftl/ftl_l2p_flat.o
00:35:28.011 CC lib/ftl/ftl_nv_cache.o
00:35:28.011 LIB libspdk_scsi.a
00:35:28.011 CC lib/ftl/ftl_band.o
00:35:28.011 CC lib/ftl/ftl_band_ops.o
00:35:28.011 CC lib/ftl/ftl_writer.o
00:35:28.011 CC lib/ftl/ftl_rq.o
00:35:28.011 CC lib/ftl/ftl_reloc.o
00:35:28.269 CC lib/iscsi/conn.o
00:35:28.269 CC lib/ftl/ftl_l2p_cache.o
00:35:28.269 CC lib/iscsi/init_grp.o
00:35:28.269 CC lib/iscsi/iscsi.o
00:35:28.269 CC lib/iscsi/md5.o
00:35:28.269 CC lib/vhost/vhost.o
00:35:28.269 CC lib/vhost/vhost_rpc.o
00:35:28.269 CC lib/iscsi/param.o
00:35:28.527 CC lib/ftl/ftl_p2l.o
00:35:28.527 CC lib/iscsi/portal_grp.o
00:35:28.527 CC lib/iscsi/tgt_node.o
00:35:28.527 CC lib/iscsi/iscsi_subsystem.o
00:35:28.527 CC lib/iscsi/iscsi_rpc.o
00:35:28.527 CC lib/ftl/mngt/ftl_mngt.o
00:35:28.527 LIB libspdk_nvmf.a
00:35:28.798 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:35:28.798 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:35:28.798 CC lib/ftl/mngt/ftl_mngt_startup.o
00:35:28.798 CC lib/ftl/mngt/ftl_mngt_md.o
00:35:28.798 CC lib/vhost/vhost_scsi.o
00:35:28.798 CC lib/iscsi/task.o
00:35:28.798 CC lib/vhost/vhost_blk.o
00:35:28.798 CC lib/ftl/mngt/ftl_mngt_misc.o
00:35:28.798 CC lib/vhost/rte_vhost_user.o
00:35:28.798 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:35:28.798 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:35:28.798 CC lib/ftl/mngt/ftl_mngt_band.o
00:35:29.080 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:35:29.080 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:35:29.080 LIB libspdk_iscsi.a
00:35:29.080 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:35:29.081 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:35:29.081 CC lib/ftl/utils/ftl_conf.o
00:35:29.081 CC lib/ftl/utils/ftl_md.o
00:35:29.081 CC lib/ftl/utils/ftl_mempool.o
00:35:29.081 CC lib/ftl/utils/ftl_bitmap.o
00:35:29.081 CC lib/ftl/utils/ftl_property.o
00:35:29.351 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:35:29.351 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:35:29.351 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:35:29.351 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:35:29.351 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:35:29.351 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:35:29.351 CC lib/ftl/upgrade/ftl_sb_v3.o
00:35:29.351 CC lib/ftl/upgrade/ftl_sb_v5.o
00:35:29.351 CC lib/ftl/nvc/ftl_nvc_dev.o
00:35:29.351 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:35:29.351 CC lib/ftl/base/ftl_base_dev.o
00:35:29.351 CC lib/ftl/base/ftl_base_bdev.o
00:35:29.610 LIB libspdk_vhost.a
00:35:29.610 LIB libspdk_ftl.a
00:35:29.868 CC module/env_dpdk/env_dpdk_rpc.o
00:35:29.868 CC module/accel/ioat/accel_ioat.o
00:35:29.868 CC module/accel/dsa/accel_dsa.o
00:35:29.868 CC module/accel/error/accel_error.o
00:35:29.868 CC module/scheduler/dynamic/scheduler_dynamic.o
00:35:29.868 CC module/sock/posix/posix.o
00:35:29.868 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:35:29.868 CC module/accel/iaa/accel_iaa.o
00:35:29.868 CC module/blob/bdev/blob_bdev.o
00:35:29.868 CC module/scheduler/gscheduler/gscheduler.o
00:35:29.868 LIB libspdk_env_dpdk_rpc.a
00:35:29.868 CC module/accel/iaa/accel_iaa_rpc.o
00:35:29.869 CC module/accel/ioat/accel_ioat_rpc.o
00:35:29.869 LIB libspdk_scheduler_dpdk_governor.a
00:35:29.869 CC module/accel/error/accel_error_rpc.o
00:35:29.869 LIB libspdk_scheduler_gscheduler.a
00:35:30.127 CC module/accel/dsa/accel_dsa_rpc.o
00:35:30.127 LIB libspdk_scheduler_dynamic.a
00:35:30.127 LIB libspdk_blob_bdev.a
00:35:30.127 LIB libspdk_accel_iaa.a
00:35:30.127 LIB libspdk_accel_ioat.a
00:35:30.127 LIB libspdk_accel_error.a
00:35:30.127 LIB libspdk_accel_dsa.a
00:35:30.127 CC module/bdev/delay/vbdev_delay.o
00:35:30.127 CC module/bdev/lvol/vbdev_lvol.o
00:35:30.127 CC module/blobfs/bdev/blobfs_bdev.o
00:35:30.127 CC module/bdev/gpt/gpt.o
00:35:30.127 CC module/bdev/malloc/bdev_malloc.o
00:35:30.127 CC module/bdev/null/bdev_null.o
00:35:30.127 CC module/bdev/nvme/bdev_nvme.o
00:35:30.127 CC module/bdev/error/vbdev_error.o
00:35:30.386 CC module/bdev/passthru/vbdev_passthru.o
00:35:30.386 LIB libspdk_sock_posix.a
00:35:30.386 CC module/bdev/gpt/vbdev_gpt.o
00:35:30.386 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:35:30.386 CC module/bdev/nvme/bdev_nvme_rpc.o
00:35:30.386 CC module/bdev/null/bdev_null_rpc.o
00:35:30.386 CC module/bdev/error/vbdev_error_rpc.o
00:35:30.386 CC module/bdev/delay/vbdev_delay_rpc.o
00:35:30.386 CC module/bdev/malloc/bdev_malloc_rpc.o
00:35:30.386 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:35:30.386 LIB libspdk_blobfs_bdev.a
00:35:30.645 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:35:30.645 LIB libspdk_bdev_null.a
00:35:30.645 LIB libspdk_bdev_gpt.a
00:35:30.645 LIB libspdk_bdev_error.a
00:35:30.645 CC module/bdev/nvme/nvme_rpc.o
00:35:30.645 CC module/bdev/nvme/bdev_mdns_client.o
00:35:30.645 LIB libspdk_bdev_delay.a
00:35:30.645 LIB libspdk_bdev_malloc.a
00:35:30.645 LIB libspdk_bdev_passthru.a
00:35:30.645 CC module/bdev/raid/bdev_raid.o
00:35:30.645 CC module/bdev/raid/bdev_raid_rpc.o
00:35:30.645 CC module/bdev/raid/bdev_raid_sb.o
00:35:30.645 CC module/bdev/split/vbdev_split.o
00:35:30.645 CC module/bdev/split/vbdev_split_rpc.o
00:35:30.645 CC module/bdev/zone_block/vbdev_zone_block.o
00:35:30.645 CC module/bdev/nvme/vbdev_opal.o
00:35:30.645 LIB libspdk_bdev_lvol.a
00:35:30.904 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:35:30.904 CC module/bdev/raid/raid0.o
00:35:30.904 CC module/bdev/nvme/vbdev_opal_rpc.o
00:35:30.904 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:35:30.904 LIB libspdk_bdev_split.a
00:35:30.904 CC module/bdev/aio/bdev_aio.o
00:35:30.904 CC module/bdev/aio/bdev_aio_rpc.o
00:35:30.904 CC module/bdev/raid/raid1.o
00:35:30.904 CC module/bdev/ftl/bdev_ftl.o
00:35:30.904 LIB libspdk_bdev_zone_block.a
00:35:30.904 CC module/bdev/raid/concat.o
00:35:30.904 CC module/bdev/raid/raid5f.o
00:35:31.162 CC module/bdev/ftl/bdev_ftl_rpc.o
00:35:31.162 CC module/bdev/iscsi/bdev_iscsi.o
00:35:31.162 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:35:31.162 LIB libspdk_bdev_aio.a
00:35:31.162 CC module/bdev/virtio/bdev_virtio_scsi.o
00:35:31.162 CC module/bdev/virtio/bdev_virtio_blk.o
00:35:31.162 CC module/bdev/virtio/bdev_virtio_rpc.o
00:35:31.162 LIB libspdk_bdev_ftl.a
00:35:31.421 LIB libspdk_bdev_raid.a
00:35:31.421 LIB libspdk_bdev_iscsi.a
00:35:31.421 LIB libspdk_bdev_nvme.a
00:35:31.421 LIB libspdk_bdev_virtio.a
00:35:31.680 CC module/event/subsystems/iobuf/iobuf.o
00:35:31.680 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:35:31.680 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:35:31.680 CC module/event/subsystems/scheduler/scheduler.o
00:35:31.680 CC module/event/subsystems/sock/sock.o
00:35:31.680 CC module/event/subsystems/vmd/vmd.o
00:35:31.680 CC module/event/subsystems/vmd/vmd_rpc.o
00:35:31.680 LIB libspdk_event_vhost_blk.a
00:35:31.680 LIB libspdk_event_sock.a
00:35:31.938 LIB libspdk_event_scheduler.a
00:35:31.938 LIB libspdk_event_vmd.a
00:35:31.938 LIB libspdk_event_iobuf.a
00:35:31.938 CC module/event/subsystems/accel/accel.o
00:35:31.938 LIB libspdk_event_accel.a
00:35:32.197 CC module/event/subsystems/bdev/bdev.o
00:35:32.197 LIB libspdk_event_bdev.a
00:35:32.456 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:35:32.456 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:35:32.456 CC module/event/subsystems/scsi/scsi.o
00:35:32.456 CC module/event/subsystems/nbd/nbd.o
00:35:32.456 LIB libspdk_event_nbd.a
00:35:32.456 LIB libspdk_event_scsi.a
00:35:32.714 LIB libspdk_event_nvmf.a
00:35:32.714 CC module/event/subsystems/iscsi/iscsi.o
00:35:32.714 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:35:32.714 LIB libspdk_event_vhost_scsi.a
00:35:32.972 LIB libspdk_event_iscsi.a
00:35:32.972 CXX app/trace/trace.o
00:35:32.972 CC app/spdk_nvme_perf/perf.o
00:35:32.972 CC app/trace_record/trace_record.o
00:35:32.972 CC app/spdk_nvme_identify/identify.o
00:35:32.972 CC app/spdk_lspci/spdk_lspci.o
00:35:32.972 CC app/nvmf_tgt/nvmf_main.o
00:35:32.972 CC examples/accel/perf/accel_perf.o
00:35:32.972 CC app/iscsi_tgt/iscsi_tgt.o
00:35:32.972 CC app/spdk_tgt/spdk_tgt.o
00:35:33.230 CC test/accel/dif/dif.o
00:35:33.230 LINK spdk_lspci
00:35:33.230 LINK spdk_trace_record
00:35:33.230 LINK nvmf_tgt
00:35:33.230 LINK iscsi_tgt
00:35:33.230 LINK spdk_tgt
00:35:33.488 LINK accel_perf
00:35:33.488 LINK dif
00:35:33.488 LINK spdk_trace
00:35:33.488 LINK spdk_nvme_identify
00:35:33.488 LINK spdk_nvme_perf
00:35:45.681 CC app/spdk_nvme_discover/discovery_aer.o
00:35:45.681 LINK spdk_nvme_discover
00:35:53.786 CC app/spdk_top/spdk_top.o
00:35:59.050 LINK spdk_top
00:36:25.598 CC app/vhost/vhost.o
00:36:26.163 LINK vhost
00:36:26.422 CC examples/bdev/hello_world/hello_bdev.o
00:36:28.320 LINK hello_bdev
00:36:32.501 CC app/spdk_dd/spdk_dd.o
00:36:33.878 LINK spdk_dd
00:36:35.285 CC test/app/bdev_svc/bdev_svc.o
00:36:35.850 LINK bdev_svc
00:36:36.415 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:36:38.311 LINK nvme_fuzz
00:36:44.868 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:36:51.431 LINK iscsi_fuzz
00:38:12.880 CC test/app/histogram_perf/histogram_perf.o
00:38:12.880 LINK histogram_perf
00:38:12.880 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:38:12.880 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:38:15.420 LINK vhost_fuzz
00:38:25.394 CC test/app/jsoncat/jsoncat.o
00:38:25.394 LINK jsoncat
00:38:43.478 CC test/bdev/bdevio/bdevio.o
00:38:43.737 LINK bdevio
00:38:45.641 CC test/blobfs/mkfs/mkfs.o
00:38:47.016 LINK mkfs
00:38:55.155 TEST_HEADER include/spdk/config.h
00:38:55.155 CXX test/cpp_headers/accel_module.o
00:38:55.155 CXX test/cpp_headers/bit_pool.o
00:38:56.091 CXX test/cpp_headers/ioat.o
00:38:57.038 CXX test/cpp_headers/blobfs.o
00:38:57.296 CC test/dma/test_dma/test_dma.o
00:38:58.232 CXX test/cpp_headers/notify.o
00:38:58.800 CXX test/cpp_headers/pipe.o
00:38:59.368 LINK test_dma
00:38:59.625 CXX test/cpp_headers/accel.o
00:39:01.001 CXX test/cpp_headers/file.o
00:39:01.569 CXX test/cpp_headers/version.o
00:39:01.828 CXX test/cpp_headers/trace_parser.o
00:39:02.763 CXX test/cpp_headers/opal_spec.o
00:39:03.022 CXX test/cpp_headers/uuid.o
00:39:04.397 CXX test/cpp_headers/likely.o
00:39:04.655 CC examples/bdev/bdevperf/bdevperf.o
00:39:05.223 CXX test/cpp_headers/dif.o
00:39:06.598 CXX test/cpp_headers/memory.o
00:39:07.973 CXX test/cpp_headers/vfio_user_pci.o
00:39:07.973 LINK bdevperf
00:39:08.949 CXX test/cpp_headers/dma.o
00:39:09.887 CXX test/cpp_headers/nbd.o
00:39:09.887 CXX test/cpp_headers/conf.o
00:39:11.264 CXX test/cpp_headers/env_dpdk.o
00:39:12.199 CXX test/cpp_headers/nvmf_spec.o
00:39:13.575 CXX test/cpp_headers/iscsi_spec.o
00:39:14.510 CXX test/cpp_headers/mmio.o
00:39:14.510 CXX test/cpp_headers/json.o
00:39:15.445 CXX test/cpp_headers/opal.o
00:39:16.823 CC test/env/mem_callbacks/mem_callbacks.o
00:39:16.823 CXX test/cpp_headers/bdev.o
00:39:18.198 CXX test/cpp_headers/base64.o
00:39:18.764 LINK mem_callbacks
00:39:18.764 CXX test/cpp_headers/blobfs_bdev.o
00:39:19.023 CXX test/cpp_headers/nvme_ocssd.o
00:39:19.959 CXX test/cpp_headers/fd.o
00:39:19.959 CC test/env/vtophys/vtophys.o
00:39:20.527 LINK vtophys
00:39:20.787 CXX test/cpp_headers/barrier.o
00:39:21.723 CXX test/cpp_headers/scsi_spec.o
00:39:21.981 CXX test/cpp_headers/zipf.o
00:39:22.916 CXX test/cpp_headers/nvmf.o
00:39:23.484 CC examples/blob/hello_world/hello_blob.o
00:39:24.076 CXX test/cpp_headers/queue.o
00:39:24.076 CXX test/cpp_headers/xor.o
00:39:24.668 LINK hello_blob
00:39:24.926 CC test/app/stub/stub.o
00:39:25.184 CXX test/cpp_headers/cpuset.o
00:39:26.117 LINK stub
00:39:26.377 CXX test/cpp_headers/thread.o
00:39:26.637 CC app/fio/nvme/fio_plugin.o
00:39:27.570 CXX test/cpp_headers/bdev_zone.o
00:39:28.944 CXX test/cpp_headers/fd_group.o
00:39:29.202 LINK spdk_nvme
00:39:29.768 CXX test/cpp_headers/tree.o
00:39:30.027 CXX test/cpp_headers/blob_bdev.o
00:39:31.929 CXX test/cpp_headers/crc64.o
00:39:32.866 CXX test/cpp_headers/assert.o
00:39:34.243 CXX test/cpp_headers/nvme_spec.o
00:39:35.619 CXX test/cpp_headers/endian.o
00:39:36.554 CXX test/cpp_headers/pci_ids.o
00:39:37.928 CXX test/cpp_headers/log.o
00:39:39.304 CXX test/cpp_headers/nvme_ocssd_spec.o
00:39:40.704 CXX test/cpp_headers/ftl.o
00:39:42.606 CXX test/cpp_headers/config.o
00:39:42.606 CXX test/cpp_headers/vhost.o
00:39:43.980 CXX test/cpp_headers/bdev_module.o
00:39:45.356 CXX test/cpp_headers/nvme_intel.o
00:39:46.731 CXX test/cpp_headers/idxd_spec.o
00:39:47.668 CXX test/cpp_headers/crc16.o
00:39:48.235 CXX test/cpp_headers/nvme.o
00:39:48.802 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:39:49.061 CC test/env/memory/memory_ut.o
00:39:49.629 CXX test/cpp_headers/stdinc.o
00:39:50.197 LINK env_dpdk_post_init
00:39:50.765 CXX test/cpp_headers/scsi.o
00:39:52.668 CXX test/cpp_headers/nvmf_fc_spec.o
00:39:53.605 CXX test/cpp_headers/idxd.o
00:39:53.864 LINK memory_ut
00:39:55.237 CXX test/cpp_headers/hexlify.o
00:39:56.612 CXX test/cpp_headers/reduce.o
00:39:58.014 CXX test/cpp_headers/crc32.o
00:39:59.390 CXX test/cpp_headers/init.o
00:40:00.327 CXX test/cpp_headers/nvmf_transport.o
00:40:02.860 CXX test/cpp_headers/nvme_zns.o
00:40:04.234 CXX test/cpp_headers/vfio_user_spec.o
00:40:05.609 CXX test/cpp_headers/util.o
00:40:07.509 CXX test/cpp_headers/jsonrpc.o
00:40:08.885 CXX test/cpp_headers/env.o
00:40:08.885 CXX test/cpp_headers/nvmf_cmd.o
00:40:10.787 CXX test/cpp_headers/lvol.o
00:40:10.787 CC examples/blob/cli/blobcli.o
00:40:12.699 CXX test/cpp_headers/histogram_data.o
00:40:13.648 CXX test/cpp_headers/event.o
00:40:13.906 LINK blobcli
00:40:15.284 CXX test/cpp_headers/trace.o
00:40:16.659 CXX test/cpp_headers/ioat_spec.o
00:40:18.552 CXX test/cpp_headers/string.o
00:40:19.922 CXX test/cpp_headers/ublk.o
00:40:21.294 CXX test/cpp_headers/bit_array.o
00:40:22.669 CXX test/cpp_headers/scheduler.o
00:40:24.045 CXX test/cpp_headers/blob.o
00:40:25.949 CXX test/cpp_headers/gpt_spec.o
00:40:26.883 CXX test/cpp_headers/sock.o
00:40:28.280 CXX test/cpp_headers/vmd.o
00:40:29.655 CXX test/cpp_headers/rpc.o
00:40:31.552 CC test/event/event_perf/event_perf.o
00:40:32.926 LINK event_perf
00:40:34.827 CC test/lvol/esnap/esnap.o
00:40:52.901 CC test/env/pci/pci_ut.o
00:40:54.277 LINK pci_ut
00:40:59.543 LINK esnap
00:41:21.517 CC test/event/reactor/reactor.o
00:41:21.811 LINK reactor
00:41:23.190 CC test/rpc_client/rpc_client_test.o
00:41:23.190 CC test/nvme/aer/aer.o
00:41:24.124 LINK rpc_client_test
00:41:25.058 LINK aer
00:41:25.994 CC test/nvme/reset/reset.o
00:41:27.367 LINK reset
00:41:37.339 CC test/nvme/sgl/sgl.o
00:41:37.905 LINK sgl
00:41:37.905 CC test/nvme/e2edp/nvme_dp.o
00:41:39.809 LINK nvme_dp
00:41:52.023 CC test/nvme/overhead/overhead.o
00:41:52.023 CC test/nvme/err_injection/err_injection.o
00:41:52.023 LINK err_injection
00:41:52.023 LINK overhead
00:42:02.048 CC test/event/reactor_perf/reactor_perf.o
00:42:02.985 LINK reactor_perf
00:42:05.520 CC test/event/app_repeat/app_repeat.o
00:42:06.455 LINK app_repeat
00:42:21.327 CC test/nvme/startup/startup.o
00:42:21.585 LINK startup
00:42:23.491 CC test/thread/poller_perf/poller_perf.o
00:42:24.427 LINK poller_perf
00:42:34.422 CC test/nvme/reserve/reserve.o
00:42:34.680 LINK reserve
00:42:34.939 CC test/nvme/simple_copy/simple_copy.o
00:42:36.840 LINK simple_copy
00:42:44.956 CC test/nvme/connect_stress/connect_stress.o
00:42:44.956 CC test/nvme/boot_partition/boot_partition.o
00:42:44.956 LINK connect_stress
00:42:45.215 LINK boot_partition
00:42:49.422 CC test/nvme/compliance/nvme_compliance.o
00:42:50.357 CC examples/ioat/perf/perf.o
00:42:50.925 LINK nvme_compliance
00:42:51.493 LINK ioat_perf
00:42:58.064 CC app/fio/bdev/fio_plugin.o
00:43:00.602 LINK spdk_bdev
00:43:04.794 CC test/thread/lock/spdk_lock.o
00:43:08.079 CC test/event/scheduler/scheduler.o
00:43:08.647 LINK scheduler
00:43:09.215 LINK spdk_lock
00:43:13.407 CC examples/ioat/verify/verify.o
00:43:13.667 LINK verify
00:43:17.860 CC test/nvme/fused_ordering/fused_ordering.o
00:43:18.425 CC test/nvme/doorbell_aers/doorbell_aers.o
00:43:18.684 LINK fused_ordering
00:43:19.620 LINK doorbell_aers
00:43:20.557 CC test/nvme/fdp/fdp.o
00:43:22.464 LINK fdp
00:43:30.585 CC test/nvme/cuse/cuse.o
00:43:30.844 CC examples/nvme/hello_world/hello_world.o
00:43:32.225 LINK hello_world
00:43:35.519 LINK cuse
00:43:42.088 CC examples/sock/hello_world/hello_sock.o
00:43:43.026 LINK hello_sock
00:43:44.930 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:43:44.930 CC examples/nvme/reconnect/reconnect.o
00:43:45.868 LINK histogram_ut
00:43:46.127 CC test/unit/lib/accel/accel.c/accel_ut.o
00:43:47.061 LINK reconnect
00:43:52.417 CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:43:55.704 LINK accel_ut
00:44:13.820 CC test/unit/lib/bdev/part.c/part_ut.o
00:44:13.820 LINK bdev_ut
00:44:16.356 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:44:16.356 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:44:16.356 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:44:16.923 LINK scsi_nvme_ut
00:44:18.300 LINK gpt_ut
00:44:19.678 LINK vbdev_lvol_ut
00:44:21.057 LINK part_ut
00:44:21.995 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:44:23.902 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:44:25.275 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:44:27.175 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:44:27.175 LINK bdev_raid_sb_ut
00:44:29.077 LINK concat_ut
00:44:29.335 LINK bdev_raid_ut
00:44:29.594 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:44:31.495 LINK raid1_ut
00:44:31.495 LINK bdev_ut
00:44:31.495 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o
00:44:31.495 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:44:32.870 LINK bdev_zone_ut
00:44:32.870 CC examples/nvme/nvme_manage/nvme_manage.o
00:44:34.243 LINK raid5f_ut
00:44:34.826 LINK nvme_manage
00:44:37.416 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:44:37.416 CC examples/nvme/arbitration/arbitration.o
00:44:38.349 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:44:38.607 LINK arbitration
00:44:38.866 LINK vbdev_zone_block_ut
00:44:39.814 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:44:41.718 LINK blob_bdev_ut
00:44:43.093 CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:44:43.093 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:44:44.029 LINK tree_ut
00:44:46.563 LINK blobfs_async_ut
00:44:47.499 CC test/unit/lib/blob/blob.c/blob_ut.o
00:44:48.435 LINK bdev_nvme_ut
00:44:48.693 CC test/unit/lib/dma/dma.c/dma_ut.o
00:44:48.693 CC test/unit/lib/event/app.c/app_ut.o
00:44:50.069 LINK dma_ut
00:44:51.007 LINK app_ut
00:44:51.266 CC test/unit/lib/event/reactor.c/reactor_ut.o
00:44:54.558 LINK reactor_ut
00:44:54.817 CC examples/nvme/hotplug/hotplug.o
00:44:56.195 LINK hotplug
00:45:00.380 CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:45:01.315 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:45:01.574 LINK ioat_ut
00:45:04.108 LINK blobfs_sync_ut
00:45:04.367 CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:45:04.627 LINK blob_ut
00:45:04.885 CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:45:05.143 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:45:06.078 LINK blobfs_bdev_ut
00:45:06.337 CC test/unit/lib/json/json_util.c/json_util_ut.o
00:45:07.271 LINK conn_ut
00:45:08.206 LINK json_util_ut
00:45:09.581 CC test/unit/lib/json/json_write.c/json_write_ut.o
00:45:10.957 LINK json_parse_ut
00:45:11.895 LINK json_write_ut
00:45:12.464 CC examples/nvme/cmb_copy/cmb_copy.o
00:45:13.432 LINK cmb_copy
00:45:17.633 CC examples/nvme/abort/abort.o
00:45:17.891 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:45:18.458 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:45:18.458 LINK abort
00:45:19.025 LINK init_grp_ut
00:45:19.025 LINK pmr_persistence
00:45:19.594 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:45:20.530 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:45:21.098 LINK jsonrpc_server_ut
00:45:21.358 CC test/unit/lib/log/log.c/log_ut.o
00:45:22.294 LINK log_ut
00:45:23.231 CC examples/vmd/lsvmd/lsvmd.o
00:45:23.799 LINK lsvmd
00:45:25.177 LINK iscsi_ut
00:45:26.555 CC examples/nvmf/nvmf/nvmf.o
00:45:27.932 LINK nvmf
00:45:30.465 CC examples/util/zipf/zipf.o
00:45:31.033 LINK zipf
00:45:34.321 CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:45:39.595 LINK lvol_ut
00:45:39.595 CC test/unit/lib/iscsi/param.c/param_ut.o
00:45:41.497 LINK param_ut
00:45:49.613 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:45:50.550 CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:45:50.808 CC test/unit/lib/notify/notify.c/notify_ut.o
00:45:51.067 LINK portal_grp_ut
00:45:51.326 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:45:51.892 LINK notify_ut
00:45:53.799 LINK nvme_ut
00:45:54.060 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:45:54.060 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:45:54.330 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:45:55.777 LINK nvme_ns_ut
00:45:55.777 LINK nvme_ctrlr_ocssd_cmd_ut
00:45:55.777 LINK nvme_ctrlr_cmd_ut
00:45:56.349 CC examples/vmd/led/led.o
00:45:56.349 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:45:56.349 LINK nvme_ctrlr_ut
00:45:56.610 LINK led
00:45:57.545 LINK tgt_node_ut
00:45:58.111 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:45:58.111 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:45:59.486 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:46:00.421 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:46:00.989 LINK nvme_ns_ocssd_cmd_ut
00:46:01.926 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:46:01.926 LINK nvme_ns_cmd_ut
00:46:01.926 LINK nvme_poll_group_ut
00:46:02.185 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:46:02.185 LINK nvme_pcie_ut
00:46:04.719 LINK nvme_qpair_ut
00:46:05.288 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:46:06.225 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:46:06.225 LINK tcp_ut
00:46:06.790 LINK nvme_quirks_ut
00:46:07.048 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:46:08.425 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:46:08.994 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:46:09.253 LINK nvme_transport_ut
00:46:09.819 CC examples/thread/thread/thread_ex.o
00:46:09.819 LINK nvme_io_msg_ut
00:46:10.387 LINK nvme_tcp_ut
00:46:10.387 LINK thread
00:46:11.322 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:46:12.259 LINK nvme_pcie_common_ut
00:46:13.195 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:46:13.762 LINK nvme_fabric_ut
00:46:13.762 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:46:14.330 LINK nvme_opal_ut
00:46:16.862 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:46:16.862 LINK nvme_rdma_ut
00:46:16.862 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:46:16.862 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:46:16.862 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:46:17.120 CC examples/interrupt_tgt/interrupt_tgt.o
00:46:17.120 CC examples/idxd/perf/perf.o
00:46:17.687 LINK interrupt_tgt
00:46:17.687 LINK idxd_perf
00:46:17.945 LINK nvme_cuse_ut
00:46:18.881 LINK ctrlr_ut
00:46:18.881 LINK ctrlr_discovery_ut
00:46:19.447 LINK subsystem_ut
00:46:26.009 CC test/unit/lib/scsi/dev.c/dev_ut.o
00:46:26.009 CC test/unit/lib/sock/sock.c/sock_ut.o
00:46:26.944 LINK dev_ut
00:46:28.322 CC test/unit/lib/sock/posix.c/posix_ut.o
00:46:29.699 LINK sock_ut
00:46:30.690 LINK posix_ut
00:46:31.257 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:46:31.516 CC test/unit/lib/scsi/lun.c/lun_ut.o
00:46:32.891 LINK ctrlr_bdev_ut
00:46:33.151 LINK lun_ut
00:46:35.054 CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:46:35.313 LINK scsi_ut
00:46:37.217 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:46:38.153 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:46:38.411 CC test/unit/lib/thread/thread.c/thread_ut.o
00:46:39.347 LINK scsi_bdev_ut
00:46:39.347 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:46:39.347 LINK scsi_pr_ut
00:46:40.283 LINK iobuf_ut
00:46:41.219 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:46:41.478 CC test/unit/lib/util/base64.c/base64_ut.o
00:46:42.045 LINK thread_ut
00:46:42.045 LINK base64_ut
00:46:43.948 LINK nvmf_ut
00:46:46.481 CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:46:46.481 CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:46:47.048 LINK cpuset_ut
00:46:47.614 LINK bit_array_ut
00:46:47.871 CC test/unit/lib/util/crc16.c/crc16_ut.o
00:46:48.437 LINK crc16_ut
00:46:49.815 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:46:50.075 LINK crc32_ieee_ut
00:46:50.334 CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:46:50.334 CC test/unit/lib/util/crc64.c/crc64_ut.o
00:46:50.334 CC test/unit/lib/util/dif.c/dif_ut.o
00:46:50.593 LINK crc32c_ut
00:46:50.593 CC test/unit/lib/util/iov.c/iov_ut.o
00:46:50.852 LINK crc64_ut
00:46:51.110 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:46:51.110 LINK iov_ut
00:46:52.046 CC test/unit/lib/util/pipe.c/pipe_ut.o
00:46:52.046 LINK dif_ut
00:46:52.305 CC test/unit/lib/util/math.c/math_ut.o
00:46:52.305 CC test/unit/lib/util/string.c/string_ut.o
00:46:52.563 LINK math_ut
00:46:52.564 LINK pipe_ut
00:46:53.500 LINK string_ut
00:46:53.500 CC test/unit/lib/util/xor.c/xor_ut.o
00:46:54.436 LINK xor_ut
00:46:54.436 LINK rdma_ut
00:46:54.694 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:46:54.952 CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:46:55.518 LINK pci_event_ut
00:46:55.775 LINK subsystem_ut
00:46:57.151 CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:46:57.151 CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:46:57.409 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:46:57.977 LINK rpc_ut
00:46:58.236 LINK idxd_user_ut
00:46:58.495 CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:46:59.432 CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:47:00.369 LINK idxd_ut
00:47:01.810 LINK transport_ut
00:47:01.810 CC test/unit/lib/rdma/common.c/common_ut.o
00:47:02.067 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:47:02.067 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:47:02.642 LINK vhost_ut
00:47:02.643 LINK ftl_l2p_ut
00:47:02.643 LINK common_ut
00:47:03.211 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:47:03.211 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:47:03.470 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:47:03.470 LINK ftl_band_ut
00:47:04.039 LINK ftl_bitmap_ut
00:47:04.039 LINK ftl_mempool_ut
00:47:04.298 LINK ftl_io_ut
00:47:04.558 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:47:05.495 LINK ftl_mngt_ut
00:47:05.755 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:47:06.321 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:47:07.259 LINK ftl_sb_ut
00:47:07.828 LINK ftl_layout_upgrade_ut
00:48:04.059 17:03:38 -- spdk/autopackage.sh@44 -- $ make -j10 clean
00:48:04.060 make[1]: Nothing to be done for 'clean'.
00:48:05.959 17:03:42 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:48:05.959 17:03:42 -- common/autotest_common.sh@718 -- $ xtrace_disable
00:48:05.959 17:03:42 -- common/autotest_common.sh@10 -- $ set +x
00:48:05.959 17:03:42 -- spdk/autopackage.sh@48 -- $ timing_finish
00:48:05.959 17:03:42 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:48:05.959 17:03:42 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:48:05.960 17:03:42 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:48:05.960 + [[ -n 2341 ]]
00:48:05.960 + sudo kill 2341
00:48:05.969 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core"
00:48:05.969 [Pipeline] }
00:48:05.988 [Pipeline] // timeout
00:48:05.993 [Pipeline] }
00:48:06.010 [Pipeline] // stage
00:48:06.015 [Pipeline] }
00:48:06.032 [Pipeline] // catchError
00:48:06.040 [Pipeline] stage
00:48:06.042 [Pipeline] { (Stop VM)
00:48:06.055 [Pipeline] sh
00:48:06.333 + vagrant halt
00:48:09.616 ==> default: Halting domain...
00:48:17.744 [Pipeline] sh
00:48:18.022 + vagrant destroy -f
00:48:20.556 ==> default: Removing domain...
00:48:21.946 [Pipeline] sh
00:48:22.227 + mv output /var/jenkins/workspace/ubuntu20-vg-autotest/output
00:48:22.238 [Pipeline] }
00:48:22.263 [Pipeline] // stage
00:48:22.270 [Pipeline] }
00:48:22.288 [Pipeline] // dir
00:48:22.294 [Pipeline] }
00:48:22.310 [Pipeline] // wrap
00:48:22.317 [Pipeline] }
00:48:22.331 [Pipeline] // catchError
00:48:22.341 [Pipeline] stage
00:48:22.343 [Pipeline] { (Epilogue)
00:48:22.357 [Pipeline] sh
00:48:22.636 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:48:37.585 [Pipeline] catchError
00:48:37.587 [Pipeline] {
00:48:37.604 [Pipeline] sh
00:48:37.888 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:48:37.888 Artifacts sizes are good
00:48:37.898 [Pipeline] }
00:48:37.917 [Pipeline] // catchError
00:48:37.929 [Pipeline] archiveArtifacts
00:48:37.937 Archiving artifacts
00:48:38.272 [Pipeline] cleanWs
00:48:38.283 [WS-CLEANUP] Deleting project workspace...
00:48:38.283 [WS-CLEANUP] Deferred wipeout is used...
00:48:38.299 [WS-CLEANUP] done
00:48:38.301 [Pipeline] }
00:48:38.318 [Pipeline] // stage
00:48:38.323 [Pipeline] }
00:48:38.338 [Pipeline] // node
00:48:38.344 [Pipeline] End of Pipeline
00:48:38.377 Finished: SUCCESS